Efficient parallel FDTD algorithm for modeling infinite graphene sheet simulations
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921939
O. Ramadan
Graphene, an effectively infinitely thin two-dimensional material, is a very promising optoelectronic material that has received much attention due to its outstanding electrical and optical properties. This paper describes an efficient message-passing interface (MPI) parallel implementation of the finite difference time domain (FDTD) algorithm for modeling infinite graphene sheet simulations. The algorithm, which is based on the domain decomposition approach, reduces the number of field components to be exchanged between neighboring processors compared with the conventional parallel MPI FDTD implementation. Numerical simulations are included to show the effectiveness of the proposed parallel algorithm.
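To make the data-exchange idea concrete, here is a minimal mpi4py sketch of a 1-D domain decomposition in which each rank updates a local slab and swaps one ghost value per field with each neighbor every time step. The grid size, update coefficients, and field names are assumptions for illustration, not the paper's implementation.

```python
# Minimal 1-D domain-decomposition sketch with mpi4py (assumed setup, not the
# paper's code): each rank holds a slab of the grid and exchanges one ghost
# cell of each field component with its neighbors per time step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nz_local = 64                      # cells per rank along the split axis (assumed)
ez = np.zeros(nz_local + 2)        # field slab plus two ghost cells
hy = np.zeros(nz_local + 2)

up   = rank + 1 if rank + 1 < size else MPI.PROC_NULL
down = rank - 1 if rank - 1 >= 0 else MPI.PROC_NULL

for step in range(100):
    # Send the boundary H value the upper neighbor needs for its E update.
    comm.Sendrecv(hy[nz_local:nz_local + 1], dest=up,
                  recvbuf=hy[0:1], source=down)
    ez[1:-1] += 0.5 * (hy[1:-1] - hy[0:-2])   # toy 1-D update coefficients
    # Send the boundary E value the lower neighbor needs for its H update.
    comm.Sendrecv(ez[1:2], dest=down,
                  recvbuf=ez[nz_local + 1:nz_local + 2], source=up)
    hy[1:-1] += 0.5 * (ez[2:] - ez[1:-1])
```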
{"title":"Efficient parallel FDTD algorithm for modeling infinite graphene sheet simulations","authors":"O. Ramadan","doi":"10.1109/IACS.2017.7921939","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921939","url":null,"abstract":"Graphene, which is considered to be an infinitely thin two-dimension material, is a very promising optoelectronic material and has received much attention due to its outstanding electrical and optical properties. This paper describes an efficient message-passing interface (MPI) parallel implementation of the finite difference time domain (FDTD) algorithm for modeling infinite Graphene sheet simulations. The algorithm, which is based on the domain decomposition approach, reduces the number of field components to be exchanged between the neighboring processors as compared with the conventional parallel MPI FDTD implementation. Numerical simulations are included to show the effectiveness of the proposed parallel algorithm.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127199377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corpora for sentiment analysis of Arabic text in social media
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921947
M. Itani, C. Roast, S. Al-Khayatt
Different Natural Language Processing (NLP) applications, such as text categorization and machine translation, need annotated corpora to check quality and performance. Similarly, sentiment analysis requires annotated corpora to test the performance of classifiers. Manual annotation performed by native speakers is used as a benchmark to measure how accurate a classifier is. In this paper we summarise currently available Arabic corpora and describe work in progress to build, annotate, and use Arabic corpora consisting of Facebook (FB) posts. The distinctive nature of these corpora is that they are based on posts written in Dialectal Arabic (DA), which follows no specific grammatical or spelling standards. The corpora are annotated with five labels (positive, negative, dual, neutral, and spam). In addition to describing how the corpora were built, the paper illustrates how manual tagging can be used to extract opinionated words and phrases for use in a lexicon-based classifier.
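To illustrate the final step, here is a toy lexicon-based classifier over the five labels; the lexicon entries and decision rules are invented for illustration and are far simpler than what a classifier built from the actual corpora would use.

```python
# Toy lexicon-based classifier in the spirit described above (illustrative
# only; the lexicon entries and label rules here are assumptions).
POSITIVE = {"رائع", "ممتاز", "great"}   # opinionated words mined from tagged posts
NEGATIVE = {"سيء", "ممل", "bad"}

def classify(post: str) -> str:
    tokens = post.split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos and neg:
        return "dual"        # both polarities present in one post
    if pos:
        return "positive"
    if neg:
        return "negative"
    return "neutral"         # spam detection would need extra rules

print(classify("great ممتاز but ممل"))  # -> dual
```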
{"title":"Corpora for sentiment analysis of Arabic text in social media","authors":"M. Itani, C. Roast, S. Al-Khayatt","doi":"10.1109/IACS.2017.7921947","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921947","url":null,"abstract":"Different Natural Language Processing (NLP) applications such as text categorization, machine translation, etc., need annotated corpora to check quality and performance. Similarly, sentiment analysis requires annotated corpora to test the performance of classifiers. Manual annotation performed by native speakers is used as a benchmark test to measure how accurate a classifier is. In this paper we summarise currently available Arabic corpora and describe work in progress to build, annotate, and use Arabic corpora consisting of Facebook (FB) posts. The distinctive nature of these corpora is that they are based on posts written in Dialectal Arabic (DA) not following specific grammatical or spelling standards. The corpora are annotated with five labels (positive, negative, dual, neutral, and spam). In addition to building the corpora, the paper illustrates how manual tagging can be used to extract opinionated words and phrases to be used in a lexicon-based classifier.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128662148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a FlexRay controller — From SDL to StateFlow and Simulink blocks: Generation and verification
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921941
Hana Mejdi, Besma Jedli, S. Hasnaoui
The need to exchange large amounts of information in many fields, especially the automotive one, with minimal space, weight, wiring complexity, and cost has driven the development of several automotive networks such as the Local Interconnect Network (LIN), the Controller Area Network (CAN), the Time Triggered Protocol (TTP), and FlexRay. CAN is currently the most widely used, but it has many limits: it is not strictly deterministic, and it is not the ideal protocol for applications that need a high degree of safety. These drawbacks were the starting point for the development of the FlexRay protocol, which is more efficient than CAN. We adopted a new way to design and implement the FlexRay controller, which consists of translating the Specification and Description Language (SDL) diagrams into StateFlow diagrams. From these diagrams, we generate the VHDL code of the controller, which is then implemented on an FPGA to create the hardware chip for FlexRay. We used ModelSim to verify the correctness of the designed blocks. In this paper we present the adopted procedure for the Macrotick block generation and its verification with ModelSim.
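As a behavioral illustration of what a macrotick-generation block computes (a sketch in Python rather than SDL or VHDL, with an assumed microtick-to-macrotick ratio): a FlexRay macrotick is derived by counting clock microticks and emitting a pulse at each boundary.

```python
# Behavioral sketch (Python, not the authors' VHDL) of macrotick generation:
# count sample-clock microticks and emit one macrotick pulse every N ticks.
# The ratio below is an assumption for illustration.
MICROTICKS_PER_MACROTICK = 40

def macrotick_generator():
    """Yield True on cycles where a macrotick pulse is emitted."""
    count = 0
    while True:
        count += 1
        if count == MICROTICKS_PER_MACROTICK:
            count = 0
            yield True       # macrotick boundary
        else:
            yield False

gen = macrotick_generator()
pulses = [i for i, tick in zip(range(200), gen) if tick]
print(pulses)  # -> [39, 79, 119, 159, 199]
```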
{"title":"Designing a FlexRay controller — From SDL to StateFlow and Simulink blocks: Generation and verification","authors":"Hana Mejdi, Besma Jedli, S. Hasnaoui","doi":"10.1109/IACS.2017.7921941","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921941","url":null,"abstract":"The need to exchange large amounts of information in all fields especially in the automotive one, with minimal space requirements, weight, complexity of conjunction and cost, perform the development of several automotive networks such as Local Interconnect Network (LIN), Controller Area Network (CAN), Time Triggered Protocol (CAN) and FlexRay. The CAN is currently the most used, but it has many limits. It is not strictly deterministic and it is not the ideal protocol for application that needs high degree of safety. These drawbacks have been the starting points for the development of the FlexRay protocol which is more efficient than CAN protocol. We adopted a new way to achieve the design and the implementation of the FlexRay Controller which consists of the translation of the (Specification and Description Language) SDL diagrams into StateFlow diagrams. With these diagrams, we generate the VHDL code of the controller which will be implemented in an FPGA to create the hardware chip for FlexRay. We had used ModelSim to verify the correctness of the designed blocks. We give in this paper the adopted procedure for the Macrotick block Generation and its verification by ModelSim.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133746106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid CPU-GPU implementation to accelerate multiple pairwise protein sequence alignment
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921938
M. Shehab, Abdullah A. Ghadawi, L. Alawneh, M. Al-Ayyoub, Y. Jararweh
Bioinformatics is an interdisciplinary field that applies techniques from computer science, statistics, and engineering to guide the study of large biological datasets. Protein structure and sequence analysis is very important in bioinformatics, mainly for understanding cellular processes, which helps simplify the development of drugs for metabolic pathways. Protein sequence alignment is a technique concerned with identifying the similarities among different protein structures in order to discover the relationships among them. These techniques are computationally intensive, which hinders their applicability. In this paper, we propose a parallel approach to reduce the computation time of two sequence alignment algorithms using a hybrid implementation that combines the power of multicore CPUs with that of contemporary GPUs. Our study shows that the hybrid approach solves the problem much faster than its sequential counterpart.
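As a sketch of the work-distribution idea, independent sequence pairs can be scored in parallel; here a process pool stands in for the hybrid CPU/GPU workers, and a simplified edit-distance kernel stands in for the alignment scoring. All sequences and scores are illustrative assumptions.

```python
# Hedged sketch: score many protein pairs in parallel. A Pool stands in for
# both sides of the hybrid CPU-GPU scheme; the kernel is a toy stand-in.
from itertools import combinations
from multiprocessing import Pool

def align_score(pair):
    """Toy alignment score: -1 per edit (substitution/insertion/deletion)."""
    a, b = pair
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return (a, b, -prev[-1])

if __name__ == "__main__":
    seqs = ["MKV", "MKI", "MRV"]          # toy protein fragments (assumed)
    with Pool() as pool:
        for a, b, s in pool.map(align_score, combinations(seqs, 2)):
            print(a, b, s)
```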
{"title":"A hybrid CPU-GPU implementation to accelerate multiple pairwise protein sequence alignment","authors":"M. Shehab, Abdullah A. Ghadawi, L. Alawneh, M. Al-Ayyoub, Y. Jararweh","doi":"10.1109/IACS.2017.7921938","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921938","url":null,"abstract":"Bioinformatics is an interdisciplinary field that applies techniques from computer science, statistics and engineering to guide in the study of large biological data. Protein structure and sequence analysis is very important in bioinformatics mainly in understanding cellular processes which helps in simplifying the development of drugs for metabolic pathways. Protein sequence alignment is a technique that is concerned with identifying the similarities among different protein structures in order to discover the relationships among them. These kinds of techniques are computationally extensive which hinders their applicability. In this paper, we propose a parallel approach to speed up the computational time of two sequence alignment algorithms using a hybrid implementation that combines the power of multicore CPUs and that of contemporary GPUs. Our study shows that the hybrid approach solves the problem much faster than its sequential counterpart.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129606724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The noble Quran Arabic ontology: Domain ontological model and evaluation of human and social relations
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921943
Yahya M. Tashtoush, Majd Al-Soud, Reema M. AbuJazoh, Manar Al-Frehat
Recently, Arab and Muslim researchers have given great attention to information retrieval and knowledge search in the noble Quran. Over the past years, many websites and applications have offered methods to search the noble Quran: some offer syntactic search, others semantic search, and some offer both. The noble Quran has a special style and a metaphorical nature. Furthermore, the Arabic of the noble Quran has a complicated structure that requires exceptional extra attention for searching and information retrieval, more so than English or other languages. This paper proposes a new ontological model of the human social relations in the noble Quran, built with the Web Ontology Language (OWL) and the Resource Description Framework (RDF). The methodology involves a descriptive identification of the human-relation concepts described in the noble Quran and of the relations among them. The ontological model in this work is built mainly to support the Arabic, ArabEzi (a popular chat transliteration), and English languages. SPARQL queries and DL queries are used against the ontology model to retrieve Quran domains, concepts, and verses in Arabic. Hence, this work will help in searching the noble Quran and retrieving information from it.
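For illustration, a minimal rdflib sketch of storing and querying such relation triples with SPARQL is shown below; the namespace IRI, concept names, and relation are hypothetical placeholders, not the authors' actual ontology.

```python
# Minimal RDF/SPARQL sketch of the kind of triple store the paper describes.
# The namespace and terms below are invented placeholders.
from rdflib import Graph, Namespace, Literal

Q = Namespace("http://example.org/quran-ontology#")   # hypothetical IRI
g = Graph()
g.add((Q.Father, Q.hasRelationWith, Q.Son))           # a human-relation triple
g.add((Q.Father, Q.labelAr, Literal("أب")))           # Arabic label

results = g.query("""
    PREFIX q: <http://example.org/quran-ontology#>
    SELECT ?a ?b WHERE { ?a q:hasRelationWith ?b . }
""")
for a, b in results:
    print(a, b)
```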
{"title":"The noble quran Arabic ontology: Domain ontological model and evaluation of human and social relations","authors":"Yahya M. Tashtoush, Majd Al-Soud, Reema M. AbuJazoh, Manar Al-Frehat","doi":"10.1109/IACS.2017.7921943","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921943","url":null,"abstract":"Recently, Arab and Muslim researchers have given a great attention to retrieve information and search about knowledge in the noble Quran. During the past years a lot of websites and applications offered a number of methods to search throughout the noble Quran, some offered a syntactic search, others offered a semantic search and some others offered both. The noble Quran has a special style and metaphorical nature. Furthermore, the Arabic language of the noble Quran has a complicated structure that needs an exceptional extra attention on the issues of searching as well as information retrieval rather than English or any other language. This paper proposes a new ontological modeling that models the human social relations in the noble Quran by employing Web Ontology Language (OWL) as well as Resource Description Framework (RDF). This paper methodology involves a descriptive identification of the human relations related concepts that are described in the Noble Quran with identifying the relations among them. The concept ontological model, in this work, mainly built to support Arabic, ArabEzi (popular chat language) and English languages. As a result, SPARQL queries and DL queries are used in the ontology model to retrieve Quran domains, concepts and Verses in Arabic language. Hence, this work will help in the noble Quran searching and retrieving information.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133362235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transforming RDB with BLOB fields to MongoDB
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921988
Ahmed Ibrahim, M. Youssef, E. Fakharany
Recently, NoSQL databases have become popular in mobile and web applications. There are several types of NoSQL databases, such as columnar, key-value, and graph databases, and finally the document store database, which is efficient and supports more dynamic queries than a conventional RDBMS. This paper proposes an automatic method to map data from a relational database to a document store database (MongoDB). This holds for both structured and unstructured data such as Word document files. The proposed method is also capable of extracting keywords from BLOBs (Binary Large Objects) stored in relational databases so that they can be mapped into MongoDB. The results show that generating the output of the relational-to-document-store mapping has complexity of order n, where n is the number of records processed, for both types of data.
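A hedged sketch of this row-to-document mapping is shown below, including a naive keyword pass over a BLOB column. The table and field names are assumptions, and a real extractor for Word files would need format-aware parsing. Each record maps to one document in a single pass, consistent with the reported order-n behavior.

```python
# Sketch (assumed schema, not the paper's implementation): copy rows from a
# relational table into MongoDB documents, extracting keywords from a BLOB.
import sqlite3
from collections import Counter
from pymongo import MongoClient

src = sqlite3.connect("library.db")                      # assumed source RDB
dst = MongoClient("mongodb://localhost:27017")["library"]["papers"]

for row_id, title, blob in src.execute("SELECT id, title, body FROM papers"):
    text = bytes(blob).decode("utf-8", errors="ignore")  # toy BLOB handling
    keywords = [w for w, _ in Counter(text.split()).most_common(5)]
    dst.insert_one({"_id": row_id, "title": title, "keywords": keywords})
```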
{"title":"Transforming RDB with BLOB fields to MongoDB","authors":"Ahmed Ibrahim, M. Youssef, E. Fakharany","doi":"10.1109/IACS.2017.7921988","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921988","url":null,"abstract":"Recently the No-SQL databases has been popularly used in the mobile and web applications. There are several types of No-SQL databases such as columnar, key-value, graph databases and finally the document store database which is efficient and has more dynamic queries than the normal RDBMS. This paper proposes an automatic method to map the data from the relational database to document store database (MONGODB). This is true for both structured and unstructured data such as word document files. The proposed method also has the capability of extracting the keywords from Blobs “Binary Large Object” Stored in relational databases to be mapped inside the MONGODB. The results show a complexity of order n where n is the number of records processed when it comes to the performance in creating the output from mapping the relational database to document store database for both types of data.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123465459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HidroMORE 2: An optimized and parallel version of HidroMORE
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921936
Raúl Moreno, Enrique Arias, José L. Sánchez, D. Cazorla, Jesús Garrido, J. González-Piqueras
The HidroMORE software was developed in the Remote Sensing and Geographic Information Systems (GIS) section of the University of Castilla-La Mancha to extend evapotranspiration assessment to a regional scale, implementing the FAO-56 methodology and assimilating the basal crop coefficient from Normalized Difference Vegetation Index (NDVI) images calculated from satellite imagery. However, when the software deals with high-dimension images, its performance decays greatly, and HidroMORE is now required to carry out calculations that are unapproachable in its current state. In this work, HidroMORE 2 is presented, in which a High Performance Computing approach has been adopted to manage the complexity of the HidroMORE software. The work improves performance along two main axes: better handling of input/output, that is, of hard-disk operations; and the use of parallel computing that exploits current computer architectures, in particular multicore architectures.
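The following sketch illustrates the two levers (with assumed file names and a stand-in model, not HidroMORE 2 itself): block-wise disk access through a memory map instead of fine-grained reads, and multicore parallelism over the blocks.

```python
# Illustrative sketch, assumptions throughout: process a large NDVI raster
# in blocks, each worker reading its own slice from disk.
import numpy as np
from multiprocessing import Pool

N_PIXELS = 8_000_000          # assumed raster size (float32 cells)
BLOCK = 1_000_000

def process_block(start):
    """Map one block of the raster to a partial result (toy model)."""
    count = min(BLOCK, N_PIXELS - start)
    ndvi = np.memmap("ndvi.dat", dtype=np.float32, mode="r",
                     offset=start * 4, shape=(count,))
    kcb = 1.2 * np.asarray(ndvi) + 0.1   # stand-in for the FAO-56 step
    return float(kcb.sum())

if __name__ == "__main__":
    with Pool() as pool:
        total = sum(pool.map(process_block, range(0, N_PIXELS, BLOCK)))
    print(total)
```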
{"title":"HidroMORE 2: An optimized and parallel version of HidroMORE","authors":"Raúl Moreno, Enrique Arias, José L. Sánchez, D. Cazorla, Jesús Garrido, J. González-Piqueras","doi":"10.1109/IACS.2017.7921936","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921936","url":null,"abstract":"HidroMORE software was developed in the Remote Sensing and Geographic Information Systems (GIS) section from the University of Castilla-la Mancha to extend the Evapotranspiration assessment to a regional scale, implementing the FAO-56 methodology and the assimilation of the basal crop coefficient from Normalized Difference Vegetation Index (NDVI) images calculated from satellite images. However, when this software deals with high dimension images, the performance greatly decays. Currently, HidroMORE is being required for carring out calculations that result unapproachable in its current state. In this work HidroMORE 2 is presented where a High Performance Computing approach has been considered to manage the complexity of HidroMORE software. The work presented here takes into account two main aspects in order to improve the performance: improvements on input/output operations, that is, a better manage of hard disk operations; and on the other hand the use of Parallel Computing by exploiting current computer architectures, in particular, multicore architectures.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120949092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating Levenshtein and Damerau edit distance algorithms using GPU with unified memory
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921937
Khaled Balhaf, M. Alsmirat, M. Al-Ayyoub, Y. Jararweh, M. Shehab
String matching problems such as sequence alignment are fundamental in many computer science fields, such as natural language processing (NLP) and bioinformatics. Many algorithms have been proposed in the literature to address this problem. Some of these algorithms compute the edit distance between two strings to perform the matching. However, these algorithms usually require long execution times, and many studies have used high performance computing to reduce the execution time of string matching algorithms. In this paper, we use a CUDA-based Graphics Processing Unit (GPU) and the newly introduced Unified Memory (UM) to speed up the most common algorithms for computing the edit distance between two strings: the Levenshtein and Damerau distance algorithms. Our results show that implementing the Levenshtein and Damerau distance algorithms on the GPU improves their execution times by about 11X and 12X, respectively, compared to the sequential implementation, and improvements of about 61X and 71X, respectively, can be achieved when the GPU is used with unified memory.
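For reference, the two dynamic programs being accelerated are shown below in plain Python (sequential baselines, not the paper's CUDA kernels); GPU implementations typically exploit the fact that cells on the same anti-diagonal of the DP table are independent and can be computed in parallel.

```python
# CPU reference implementations of the two edit distances.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,               # deletion
                         cur[j - 1] + 1,            # insertion
                         prev[j - 1] + (ca != cb))  # substitution
        prev = cur
    return prev[-1]

def damerau(a: str, b: str) -> int:
    # Optimal-string-alignment variant: adds adjacent transpositions.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1): d[i][0] = i
    for j in range(len(b) + 1): d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and a[i-1] == b[j-2] and a[i-2] == b[j-1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

print(levenshtein("kitten", "sitting"), damerau("ca", "ac"))  # -> 3 1
```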
{"title":"Accelerating Levenshtein and Damerau edit distance algorithms using GPU with unified memory","authors":"Khaled Balhaf, M. Alsmirat, M. Al-Ayyoub, Y. Jararweh, M. Shehab","doi":"10.1109/IACS.2017.7921937","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921937","url":null,"abstract":"String matching problems such as sequence alignment is one of the fundamental problems in many computer since fields such as natural language processing (NLP) and bioinformatics. Many algorithms have been proposed in the literature to address this problem. Some of these algorithms compute the edit distance between the two strings to perform the matching. However, these algorithms usually require long execution time. Many researches use high performance computing to reduce the execution time of many string matching algorithms. In this paper, we use the CUDA based Graphics Processing Unit (GPU) and the newly introduced Unified Memory(UM) to speed up the most common algorithms to compute the edit distance between two string. These algorithms are the Levenshtein and Damerau distance algorithms. Our results show that using GPU to implement the Levenshtein and Damerau distance algorithms improvements their execution times of about 11X and 12X respectively when compared to the sequential implementation. And an improvement of about 61X and 71X respectively can be achieved when GPU is used with unified memory.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127191285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keynote Speech 1: Reactions of robot critical components to nuclear radiation
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921935
Yuan F. Zheng, J. Bentahar, E. Benkhelifa
A typical robotic system, whether a robot manipulator or a mobile robot (wheeled or legged), includes a few key components: motor, speed reducer, drive, controller, power supply, and mechanical frame. While all the components are necessary, three are critical and stand out: the speed reducer, the motor, and the battery (especially the Lithium-ion battery). The reason is twofold: failure of any of them fails the entire system, and together the three account for more than 80% of the total cost. In the robotics field, we have seen numerous works studying and improving the performance of a robot at the system level but very few focusing on the component level, especially when a robot operates in a nuclear environment. Radiation damage to electronic devices (including H-bridge drives) has long been studied in both theory and experiment; the three critical components above have received much less attention. In this talk we report our current studies on the impact of radiation on the harmonic drive (a speed reducer used in high-end robotic systems), the brushless DC motor, and the Lithium-ion battery. Our studies have introduced new approaches for evaluating the performance of individual components and reveal how radiation affects that performance. Both theoretical and experimental studies will be presented. The degraded performance of the components may impair or even fail the entire system, regardless of whether it is a robot manipulator or a mobile robot. We will therefore discuss how to design and develop radiation-hardened components that allow a robot to endure radiation-filled environments.
{"title":"Keynote Speech 1: Reactions of robot critical components to nuclear radiation","authors":"Yuan F. Zheng, J. Bentahar, E. Benkhelifa","doi":"10.1109/IACS.2017.7921935","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921935","url":null,"abstract":"A typical robotic system includes a few key components, which are motor, speed reducer, drive, controller, power supply, and mechanical frame, in both robot manipulators and/or mobile robots (wheeled or legged). While all the components are necessary, three individuals are critical and stand out, which are speed reducer, motor, and battery (especially Lithium-ion battery), respectively. The reason for those is twofold: failure of any of them will fail the entire system, and the cost of the three takes more than 80% of the total cost. In the robotic field, we have seen numerous works studying/improving the performance of a robot at the system level but very few focusing at the component level, especially when a robot operates in the nuclear environment. Radiation damage to electronic devices (including H-bridge drives) has long been studied in both theory and experiment. The study on the three critical components has received much less attention. In this talk we report our current studies on the radiation impact to harmonic drive (a speed reducer used in high-end robotic systems), brushless DC motor, and Lithium-ion battery. Our studies have invented new approaches for evaluating the performance of individual components, and reveal how the radiation will affect the performance of the latter. Both theoretical and experimental studies will be presented. The degraded performance of the components may impact or even fail the entire system, regardless of it being a robot manipulator or a mobile robot. We will therefore discuss how to design and develop radiation-hardened components, which can endure a robot in radiation-filled environments.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129719584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using behaviour-driven development with hardware-software co-design for autonomous load management
Pub Date: 2017-04-04 | DOI: 10.1109/IACS.2017.7921944
Mohammad Alhaj, Gilbert Arbez, L. Peyton
The typical approach to designing embedded systems manages the specification and design of the hardware and software separately. HW/SW co-design is used in embedded computing to allow the hardware and the software to be designed and implemented together and to make sure that the non-functional properties are met. Behavior-driven development (BDD) is an agile software development approach that spurs collaboration among project stakeholders to ensure the right software is developed to meet their needs. BDD describes the behavior of the system as executable user stories and focuses on how the system behaves as users interact with it. In this paper, we introduce an approach that integrates BDD with HW/SW co-design, providing the ability to describe the behavior of the software as executable user stories in a HW/SW co-design environment. The approach is evaluated on a renewable energy project, carried out in collaboration with a private company in Canada, to build a system for autonomous load management of self-forming renewable energy nanogrids.
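For readers unfamiliar with executable user stories, here is a hedged behave-style sketch in Python; the scenario wording and step names are invented for illustration and are not taken from the authors' nanogrid project.

```python
# Hedged sketch of an executable user story with the behave library.
#
# Corresponding feature file (features/load.feature), also invented:
#   Scenario: Shed load when generation drops
#     Given a nanogrid supplying 3 loads
#     When available generation falls below demand
#     Then the lowest-priority load is switched off
from behave import given, when, then

@given("a nanogrid supplying {n:d} loads")
def step_setup(context, n):
    context.loads = list(range(n))       # stand-in for the hardware model

@when("available generation falls below demand")
def step_drop(context):
    context.shed = context.loads.pop()   # shed the lowest-priority load

@then("the lowest-priority load is switched off")
def step_check(context):
    assert context.shed not in context.loads
```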
{"title":"Using behaviour-driven development with hardware-software co-design for autonomous load management","authors":"Mohammad Alhaj, Gilbert Arbez, L. Peyton","doi":"10.1109/IACS.2017.7921944","DOIUrl":"https://doi.org/10.1109/IACS.2017.7921944","url":null,"abstract":"The typical approach to designing embedded systems manages the specification and design of the hardware and software separately. HW/SW Co-design is used, in embedded computing, to allow the hardware and the software to be designed and implemented together and make sure that the non-functional properties are met. Behavior-driven development (BDD) is an agile software development approach that spurs collaboration of project stakeholders to ensure the right software is developed to meet their needs. BDD describes the behavior of the system as executable user stories and focuses on how the system behaves for users interact with the system. In this paper, we introduce an approach that integrates BDD with HW/SW Co-design. The approach provides the ability to describe the behavior of the software as executable user stories in a HW/SW Co-design environment. The approach is evaluated using a renewable energy project in collaboration with a private company in Canada to build a system for autonomous load management of self-forming renewable energy nanogrids.","PeriodicalId":180504,"journal":{"name":"2017 8th International Conference on Information and Communication Systems (ICICS)","volume":"23 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113934689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}