Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397214
Y. Gheraibia, A. Moussaoui, Luís Silva Azevedo, D. Parker, Y. Papadopoulos, M. Walker
Many emerging safety standards use the concept of Safety Integrity Levels (SILs) to guide designers in specifying system safety requirements and then allocating these requirements to elements of the system architecture. These standards include the new automotive safety standard ISO 26262, in which SILs are called Automotive SILs (ASILs); these are used to illustrate the application of the techniques presented in this paper. We propose a new approach in which the allocation of ASILs is performed by a new nature-inspired metaheuristic known as the Penguins Search Optimisation Algorithm (PeSOA). PeSOA mimics the collaborative hunting strategy of penguins, using the metaphor of oxygen reserves as a search intensification operator. This allows the penguins to preserve energy, consuming it only in areas of the search space that are rich in good solutions. The performance of the approach is evaluated on a benchmark hybrid braking system case study, demonstrating an improvement over results reported in the literature.
Title: Can aquatic flightless birds allocate Automotive Safety requirements? | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 1-6
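The allocation problem that PeSOA searches over can be made concrete with a toy example. The sketch below is a hedged illustration, not the paper's PeSOA implementation: it brute-forces the ISO 26262 decomposition constraint (ASIL values QM=0 through D=4; the values allocated across the independent elements of each failure combination must sum to at least the hazard's ASIL) against a hypothetical cost table.

```python
from itertools import product

# Assumed, illustrative cost of meeting each ASIL level (QM=0 ... D=4).
COST = {0: 0, 1: 10, 2: 20, 3: 40, 4: 50}

def optimal_allocation(n_components, cut_sets, hazard_asil):
    """Exhaustively find the cheapest ASIL assignment (fine for tiny n).

    cut_sets: component-index tuples; each is a failure combination whose
    allocated ASIL values must sum to at least the hazard's ASIL.
    """
    best, best_cost = None, float("inf")
    for assign in product(range(5), repeat=n_components):
        if all(sum(assign[i] for i in cs) >= hazard_asil for cs in cut_sets):
            cost = sum(COST[a] for a in assign)
            if cost < best_cost:
                best, best_cost = assign, cost
    return best, best_cost

# Two-component redundant braking channel: both must fail for the hazard,
# so the pair may share the ASIL D (=4) burden.
alloc, cost = optimal_allocation(2, [(0, 1)], 4)
print(alloc, cost)  # → (2, 2) 40: two ASIL B elements beat one ASIL D
```

For realistic architectures the number of assignments grows exponentially, which is precisely why metaheuristics such as PeSOA are used in place of exhaustive search.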
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397275
M. F. Ali, O. Batarfi, A. Bashar
This paper presents a comprehensive analysis of various simulation-based tools for testing and measuring Cloud Datacenter performance, scalability, robustness and complexity. A cloud computing infrastructure comprises various Cloud Datacenter resources, such as virtual machines, CPU, RAM, SAN, LAN and WAN topologies. Server machines need to be analyzed for their degree of utilization, in terms of both energy and service delivered to clients. We have analyzed various Cloud resources using the CloudSim, CloudReports and Cloud Analyst tools. Resource provisioning, Cloud management, load balancing, robustness and Cloud scalability form the primary scope of the work discussed in this paper. In this regard, simulation test results are presented and compared with real-time scenarios to highlight performance and scalability issues for future directions.
Title: A simulation-based comparative study of Cloud Datacenter scalability, robustness and complexity | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 547-551
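CloudSim, CloudReports and Cloud Analyst are Java frameworks; as a language-neutral illustration of the kind of experiment such simulators run, the toy model below compares round-robin and random load balancing across virtual machines by peak VM load. All names and parameters here are illustrative assumptions, not part of any of those tools.

```python
import random

def simulate(n_vms, n_requests, policy, seed=0):
    """Dispatch requests to VMs under a policy; return the peak VM load."""
    rng = random.Random(seed)
    load = [0] * n_vms
    for i in range(n_requests):
        vm = i % n_vms if policy == "round_robin" else rng.randrange(n_vms)
        load[vm] += 1
    return max(load)  # lower peak load means better balancing

rr = simulate(n_vms=4, n_requests=1000, policy="round_robin")
rnd = simulate(n_vms=4, n_requests=1000, policy="random")
print(rr, rnd)  # round-robin balances perfectly: peak 250 of 1000
```

Real simulators add models for VM provisioning cost, bandwidth and energy on top of this basic dispatch loop, which is what makes their scalability results meaningful.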
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397182
Anu A. Gokhale
Data mining is the process of finding anomalies, implicit patterns, and correlations within large data sets to predict outcomes; in other words, it is the search for relationships and global patterns that exist but are `hidden' among vast amounts of data. Applied to the educational domain, data mining is a powerful tool that enables a better understanding of the relationships, structure, patterns, and causal pathways that provide students with the cognitive strategies to think critically, make decisions, and solve problems. The talk will discuss the methodology and results of this research, present the extracted knowledge, and describe its importance in the teaching-learning space. Recent developments engineered to capture and store non-cognitive, affective-domain features, such as interest and persistence, will also be addressed. The objective is evidence-centered design, and the data mining framework acknowledges that assessments entail different levels of confidence and risk.
Title: Mining educational data: A focus on learning analytics | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), p. 1
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397222
Taroub Ahmed Mustafa Sa'ed
Cloud computing is an evolution in which IT consumption and delivery are made available in a self-service fashion via the Internet or an internal network, with a flexible pay-as-you-go business model; it requires a highly efficient and scalable architecture. It mainly provides three cloud services: storage as a service, processing as a service and software as a service, while the infrastructure is highly scalable and cost-effective for running high-performance, enterprise computing and web applications. Another major challenge in cloud computing is mobile cloud computing, which combines mobile computing and cloud computing technology. In this paper we review new techniques for achieving green computing and discuss the benefits of using cloud services for the mobile environment.
Title: Toward green and mobile cloud computing | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 203-209
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397281
Mohamed Soliman Halawa, Essam M. Ramzy Hamed, M. E. Shehab
E-learning has become an essential factor in the modern educational system. Given today's diverse student population, E-learning must recognize differences in student personalities to make the learning process more personalized and to help overcome the “one-size-fits-all” learning model. Each learner has a different learning style and different individual needs. This study proposes a data-driven recommendation model that uses the student's personality and learning style to recommend how learning course content and learning objects should be presented. The model identifies both the student's personality type and dominant preference based on the Myers-Briggs Type Indicator (MBTI) theory, utilizing data from student engagement with the learning management system (Moodle) and the social network Facebook. The model helps students become aware of their personality, which in turn makes them more efficient in their study habits, and it provides vital information for educators, equipping them with a better understanding of each student's personality. The predicted personality preference is matched with the corresponding learning style from Kolb's model. The recommendation model was tested on a sample of students, and a t-test was applied to behavioral data collected from the student sample to validate the model.
Title: Personalized E-learning recommendation model based on psychological type and learning style models | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 578-584
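The core matching step can be sketched in a few lines. The mapping tables below are a hypothetical illustration, not the mapping validated in the paper: an MBTI preference pair is matched to a Kolb learning style, which in turn selects a presentation format for course material.

```python
# Assumed mapping from the MBTI perceiving/judging letters to Kolb styles.
MBTI_TO_KOLB = {
    ("N", "T"): "assimilating",   # abstract conceptualisation + reflective observation
    ("S", "T"): "converging",     # abstract conceptualisation + active experimentation
    ("S", "F"): "accommodating",  # concrete experience + active experimentation
    ("N", "F"): "diverging",      # concrete experience + reflective observation
}
# Assumed presentation format per Kolb style.
KOLB_TO_FORMAT = {
    "assimilating": "lecture notes and readings",
    "converging": "simulations and practical exercises",
    "accommodating": "group projects and field work",
    "diverging": "brainstorming sessions and discussion forums",
}

def recommend(mbti_type):
    """Recommend a presentation format from a four-letter MBTI type, e.g. 'INTJ'."""
    perceiving, judging = mbti_type[1], mbti_type[2]  # the S/N and T/F letters
    style = MBTI_TO_KOLB[(perceiving, judging)]
    return style, KOLB_TO_FORMAT[style]

print(recommend("INTJ"))  # → ('assimilating', 'lecture notes and readings')
```

A production model would of course learn or validate such a mapping from engagement data rather than hard-code it.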
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397188
Y. Papadopoulos
The technologies of model-based design and dependability analysis for safety-critical systems, including software-intensive systems, have advanced in recent years. Much of this development can be attributed to advances in formal logic and its application to the verification of systems. In parallel, bio-inspired technologies have shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. However, we have not yet seen the emergence of a design paradigm that combines these two techniques, founded on the two pillars of formal logic and biology, effectively and throughout the design lifecycle. Such a paradigm would apply the techniques synergistically and systematically from the early stages of design, enabling optimal refinement of new designs driven effectively by dependability requirements. In my talk I discuss such a model-centric paradigm for system design that brings these technologies together to realise their combined potential benefits, and discuss its embryonic support in the HiP-HOPS (www.hip-hops.eu) dependability analysis and optimisation tool.
Title: Metaheuristics for the design of safety critical systems: A synthesis of logic and biology in system design | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 1-2
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397285
Roaa Elghondakly, Sherin M. Moussa, N. Badr
Requirements-based testing is a testing approach in which test cases are derived from requirements. Requirements represent the initial phase of the software development life cycle and are considered the basis of any software project. Therefore, any ambiguity in natural-language requirements leads to major errors in subsequent phases; moreover, poorly defined requirements may cause software project failure. Many software development models exist, such as the waterfall model and the agile model. In this paper, we propose a novel automated approach to generate test cases from requirements. Requirements can be gathered under different models: either the waterfall model (functional and non-functional requirements) or the agile model. SRS documents, non-functional requirements and user stories are parsed and used by the proposed approach to generate test cases that cover requirements of different types.
Title: Waterfall and agile requirements-based model for automated test cases generation | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 607-612
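The parsing step for agile requirements can be illustrated with a minimal sketch, under the assumption that user stories follow the canonical "As a <role>, I want <goal>, so that <benefit>" template; the test-case field names below are illustrative, not the paper's schema.

```python
import re

# Canonical agile user-story template, captured with named groups.
STORY_RE = re.compile(
    r"As an? (?P<role>.+?), I want (?P<goal>.+?), so that (?P<benefit>.+)",
    re.IGNORECASE,
)

def story_to_test_case(story, case_id):
    """Parse a user story into a test-case skeleton (hypothetical fields)."""
    m = STORY_RE.match(story.strip())
    if not m:
        raise ValueError("user story does not match the canonical template")
    return {
        "id": case_id,
        "actor": m.group("role"),
        "action": m.group("goal"),
        "expected": m.group("benefit"),
    }

tc = story_to_test_case(
    "As a customer, I want to reset my password, so that I can regain access",
    "TC-001",
)
print(tc["action"])  # → to reset my password
```

Approaches like the one proposed go well beyond this template matching, applying text mining to free-form SRS text and symbolic execution to derive concrete test data, but the skeleton extraction above is the natural first stage.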
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397284
S. Sandokji, F. Essa, Mai Fadel
The heterogeneous nature of the Graphics Processing Unit (GPU)-CPU combination makes it a candidate for coming exascale systems. The cores of the GPGPU, a cost-effective computing platform, are characterized by long periods of inactivity, which results in underutilization of hardware resources. This is due to several factors, such as limited on-chip memory and register files, inefficient scheduling mechanisms, and GPU-CPU communication bottlenecks. To counteract this underutilization of resources, various techniques have been proposed. In this research, architectural and system-level techniques aiming to manage and fully leverage GPU resources are surveyed, compared and evaluated, and the significance and challenges of warp scheduling in GPUs are thoroughly discussed. The main purpose of this paper is to give researchers insight into warp scheduling techniques for GPUs, as well as to motivate more efficient methods for enhancing performance through improved thread scheduling in future GPUs.
Title: A survey of techniques for warp scheduling in GPUs | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 600-606
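To make the scheduling idea concrete, here is a toy model (an illustrative assumption, not any vendor's hardware scheduler) of the loose round-robin policy that baseline warp schedulers implement: each cycle, one instruction is issued from the next ready warp in rotation, interleaving warps to hide memory latency.

```python
from collections import deque

def lrr_schedule(ready_warps, cycles):
    """Loose round-robin: issue one instruction per cycle, rotating warps."""
    order, queue = [], deque(ready_warps)
    for _ in range(cycles):
        w = queue.popleft()   # pick the next ready warp
        order.append(w)       # issue one instruction from it
        queue.append(w)       # and send it to the back of the rotation
    return order

print(lrr_schedule([0, 1, 2], 6))  # → [0, 1, 2, 0, 1, 2]
```

Alternatives surveyed in this literature, such as greedy-then-oldest (GTO), instead keep issuing from one warp until it stalls, trading fairness for cache locality; modeling that requires adding per-warp stall state to the loop above.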
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397205
K. Kayser, S. Borkenfeld, Rita Carvalho, G. Kayser
Digital pathology has started to enter the field of tissue-based diagnosis. It offers several applications, especially assistance in routine surgical pathology (tissue-based diagnosis). Diagnosis assistants are programs that assist the routine diagnostic work of a pathologist. Herein we describe how to appropriately design suitable algorithms. Theory: Tissue-based diagnosis derives from a) image content information, b) clinical history, c) the expertise of the pathologist, and d) knowledge about the disease. It can be transferred to a statistical decision algorithm (neural network, discriminant analysis, factor analysis, ...). Image content information: Analysis of image content information (ICI) can contribute to medical diagnosis at different levels; the level depends upon the underlying disease (diagnosis) and the derived potential treatment. Pre-analysis algorithms include a) image standardization (shading, magnification, grey value distribution) and b) evaluation of regions of interest (ROI). ICI is embedded in three coordinates (texture, object, structure). Analysis of objects and structure requires external knowledge (cell, nerve, vessel, tree, man, ...); texture is solely pixel-based and independent of external knowledge [1,2]. Algorithms: Stereology, syntactic structure analysis and measurement of object features (area, circumference, moments, ...) are useful tools in combination with external knowledge and appropriate image standardization. Structure and texture parameters require a definition of neighbourhood (Voronoi, O'Callaghan). Texture features are based upon algorithms that mimic time-series analysis and can contribute to ROI definition and to disease classification [1,2]. Material: Crude diagnoses have been automatically evaluated by the same algorithm from large sets of histological images comprising different organs (colon, lung, pleura, stomach, thyroid; > 1,000 cases). The trials resulted in reproducible and correct classification (90-98%). Conclusions: The applied algorithms can be combined to construct efficient diagnosis assistants. They can be extended to assistants for more differentiated diagnoses (including specific stains, clinical history, etc.) and can serve to formulate a general theory of "image information".
Title: How to define and implement diagnosis assistants in tissue-based diagnosis (surgical pathology): A survey | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 100-109
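The distinction between knowledge-free texture features and knowledge-dependent object features can be illustrated with a minimal sketch: purely pixel-based statistics for an image patch, including a horizontal co-occurrence contrast, which could feed ROI selection or classification. The feature set is an illustrative assumption, not the authors' feature set.

```python
def texture_features(patch):
    """patch: 2-D list of grey values (0-255); returns pixel-based features."""
    pixels = [p for row in patch for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # Contrast of horizontally adjacent pixel pairs (GLCM-style, offset (0, 1)).
    pairs = [(row[i], row[i + 1]) for row in patch for i in range(len(row) - 1)]
    contrast = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return {"mean": mean, "variance": var, "contrast": contrast}

flat = [[10, 10], [10, 10]]      # homogeneous patch: zero contrast
checker = [[0, 255], [255, 0]]   # maximally alternating patch
print(texture_features(flat)["contrast"], texture_features(checker)["contrast"])
```

Note that nothing here needs to know what a cell or vessel is, which is exactly the point of texture analysis being independent of external knowledge.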
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397241
Maouane Sayih, Anne Bruggemann-Klein, Lyuben Dimitrov
In the development of multi-platform applications, one of the most challenging activities is the creation of user interfaces (UIs). Prototyping cross-platform UIs requires a mixture of creative and programming skills, as well as solid domain- and device-specific familiarity. Moreover, firm knowledge of a multitude of implementation languages and frameworks, often limited to only a small range of platforms and devices, is required. Model-based user interface development offers a solution that supports multi-platform development and promises to reduce the effort and time developers spend on UI prototyping. This paper presents an approach to model-based user interface development using XML technologies. The approach combines an XML-based user interface description language with Extensible Stylesheet Language Transformations for model-to-model and model-to-code transformations. Our main target is XForms, and we intend to use as many XML technologies as possible during the development lifecycle of the graphical user interfaces. In addition, we present a Graphical User Interface (GUI) editor called `uimlBuddy' which encapsulates the approach and facilitates end-user development for non-programming professionals.
Title: Development of model-based User Interfaces with XML technology | 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 321-327
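The model-to-code step can be illustrated with a hedged analogue: the paper uses XSLT, but the Python sketch below mirrors the idea, transforming a tiny abstract UI description (in a hypothetical vocabulary, not UIML or XForms) into concrete HTML controls.

```python
import xml.etree.ElementTree as ET

# Hypothetical abstract UI model: platform-neutral widgets with names/labels.
ABSTRACT_UI = """
<ui>
  <input name="email" label="E-mail"/>
  <action name="submit" label="Send"/>
</ui>
"""

def ui_to_html(xml_text):
    """Model-to-code transformation: abstract widgets -> HTML form controls."""
    root = ET.fromstring(xml_text)
    parts = []
    for el in root:
        label, name = el.get("label"), el.get("name")
        if el.tag == "input":
            parts.append(f'<label>{label}<input name="{name}"/></label>')
        elif el.tag == "action":
            parts.append(f'<button name="{name}">{label}</button>')
    return "<form>" + "".join(parts) + "</form>"

html = ui_to_html(ABSTRACT_UI)
print(html)
```

An XSLT stylesheet expresses the same mapping declaratively (one template per abstract widget), and swapping stylesheets retargets the same model to another platform, which is the multi-platform payoff of the approach.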