Pub Date: 2016-01-14 | DOI: 10.1080/0740817X.2015.1078525
W. Si, Qingyu Yang, Xin Wu
ABSTRACT Crack propagation subjected to fatigue loading has been widely studied under the assumption that loads are ideally cyclic with a constant amplitude. In the real world, loads are not exactly cyclic, due to either environmental randomness or artificial designs. Loads with amplitudes higher than a threshold limit are referred to as overloads. Researchers have revealed that for some materials, overloads decelerate rather than accelerate the crack propagation process. This effect is called overload retardation. Ignoring overload retardation in reliability analysis can result in a biased estimation of product life. In the literature, however, research on overload retardation mainly focuses on studying its mechanical properties without modeling the effect quantitatively and, therefore, it cannot be incorporated into the reliability analysis of fatigue failures. In this article, we propose a physical–statistical model to quantitatively describe overload retardation considering random errors. A maximum likelihood estimation approach is developed to estimate the model parameters. In addition, a likelihood ratio test is developed to determine whether the tested material has either an overload retardation effect or an overload acceleration effect. The proposed model is further applied to reliability estimation of crack failures when a material has the overload retardation effect. Specifically, two algorithms are developed to calculate the failure time cumulative distribution function and the corresponding pointwise confidence intervals. Finally, designed experiments are conducted to verify and illustrate the developed methods along with simulation studies.
{"title":"A physical–statistical model of overload retardation for crack propagation and application in reliability estimation","authors":"W. Si, Qingyu Yang, Xin Wu","doi":"10.1080/0740817X.2015.1078525","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078525","url":null,"abstract":"ABSTRACT Crack propagation subjected to fatigue loading has been widely studied under the assumption that loads are ideally cyclic with a constant amplitude. In the real world, loads are not exactly cyclic, due to either environmental randomness or artificial designs. Loads with amplitudes higher than a threshold limit are referred to as overloads. Researchers have revealed that for some materials, overloads decelerate rather than accelerate the crack propagation process. This effect is called overload retardation. Ignoring overload retardation in reliability analysis can result in a biased estimation of product life. In the literature, however, research on overload retardation mainly focuses on studying its mechanical properties without modeling the effect quantitatively and, therefore, it cannot be incorporated into the reliability analysis of fatigue failures. In this article, we propose a physical–statistical model to quantitatively describe overload retardation considering random errors. A maximum likelihood estimation approach is developed to estimate the model parameters. In addition, a likelihood ratio test is developed to determine whether the tested material has either an overload retardation effect or an overload acceleration effect. The proposed model is further applied to reliability estimation of crack failures when a material has the overload retardation effect. Specifically, two algorithms are developed to calculate the failure time cumulative distribution function and the corresponding pointwise confidence intervals. Finally, designed experiments are conducted to verify and illustrate the developed methods along with simulation studies.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"347 - 358"},"PeriodicalIF":0.0,"publicationDate":"2016-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078525","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-13 | DOI: 10.1080/0740817X.2015.1133942
J. Bard, Z. Shu, D. Morrice, Luci K. Leykum, R. Poursani
ABSTRACT This article presents a new model for constructing annual block schedules for family medicine residents based on the rules and procedures followed by the Family Medicine Department at the University of Texas Health Science Center in San Antonio (UTHSC-SA). Such residency programs provide 3 years of specialty training for recent medical school graduates. At the beginning of each academic year, each trainee is given an annual block schedule that indicates his or her monthly assignments. These assignments are called rotations and include a variety of experiences, such as pediatric ambulatory care, the emergency room, and inpatient surgery. An important requirement associated with a subset of the rotations is that the residents spend multiple half-day sessions a week in a primary care clinic treating patients from the community. This is a key consideration when constructing the annual block schedules. In particular, one of the primary goals of most residencies is to ensure that the number of residents in clinic each day is approximately the same, so that the number of patients that can be seen each day is also the same. Uniformity provides for a more efficient use of supervisory and staff resources. The difficulty in achieving this goal is that not all rotations allow for clinic duty and that the number of patients that can be seen by a resident each session depends on his or her year of training. When constructing annual block schedules, two high-level sets of variables are available to the program coordinator. The first is the assignment of residents to rotations for each of the 12 blocks, and the second is the (partial) ability to adjust the days on which a resident has clinic duty during each rotation. In approaching the problem, our aim was to redesign the current rotations while giving all residents a 12-month schedule that concurrently (i) balances the number of patients that can be seen in the clinic during each half-day session and (ii) minimizes the number of adjustments necessary to achieve the first objective. The problem was formulated as a mixed-integer program; however, it proved too difficult to solve exactly. As an alternative, several optimization-based heuristics were developed that yielded good feasible solutions. The model and computations are illustrated with data provided by the Family Medicine Department at UTHSC-SA for a typical academic year.
{"title":"Annual block scheduling for family medicine residency programs with continuity clinic considerations","authors":"J. Bard, Z. Shu, D. Morrice, Luci K. Leykum, R. Poursani","doi":"10.1080/0740817X.2015.1133942","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1133942","url":null,"abstract":"ABSTRACT This article presents a new model for constructing annual block schedules for family medicine residents based on the rules and procedures followed by the Family Medicine Department at the University of Texas Health Science Center in San Antonio (UTHSC-SA). Such residency programs provide 3 years of specialty training for recent medical school graduates. At the beginning of each academic year, each trainee is given an annual block schedule that indicates his or her monthly assignments. These assignments are called rotations and include a variety of experiences, such as pediatric ambulatory care, the emergency room, and inpatient surgery. An important requirement associated with a subset of the rotations is that the residents spend multiple half-day sessions a week in a primary care clinic treating patients from the community. This is a key consideration when constructing the annual block schedules. In particular, one of the primary goals of most residencies is to ensure that the number of residents in clinic each day is approximately the same, so that the number of patients that can be seen each day is also the same. Uniformity provides for a more efficient use of supervisory and staff resources. The difficulty in achieving this goal is that not all rotations allow for clinic duty and that the number of patients that can be seen by a resident each session depends on his or her year of training. When constructing annual block schedules, two high-level sets of variables are available to the program coordinator. The first is the assignment of residents to rotations for each of the 12 blocks, and the second is the (partial) ability to adjust the days on which a resident has clinic duty during each rotation. In approaching the problem, our aim was to redesign the current rotations while giving all residents a 12-month schedule that concurrently (i) balances the number of patients that can be seen in the clinic during each half-day session and (ii) minimizes the number of adjustments necessary to achieve the first objective. The problem was formulated as a mixed-integer program; however, it proved too difficult to solve exactly. As an alternative, several optimization-based heuristics were developed that yielded good feasible solutions. The model and computations are illustrated with data provided by the Family Medicine Department at UTHSC-SA for a typical academic year.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"797 - 811"},"PeriodicalIF":0.0,"publicationDate":"2016-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1133942","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-13 | DOI: 10.1080/0740817X.2015.1125043
Soonhui Lee, B. Nelson
ABSTRACT Many indifference-zone Ranking-and-Selection (R&S) procedures have been invented for choosing the best simulated system. To obtain the desired Probability of Correct Selection (PCS), existing procedures exploit knowledge about the particular combination of system performance measure (e.g., mean, probability, variance, quantile) and assumed output distribution (e.g., normal, exponential, Poisson). In this article, we take a step toward general-purpose R&S procedures that work for many types of performance measures and output distributions, including situations where different simulated alternatives have entirely different output distribution families. There are only two versions of our procedure: with and without the use of common random numbers. To obtain the required PCS we exploit intense computation via bootstrapping, and to mitigate the computational effort we create an adaptive sample-allocation scheme that guides the procedure to quickly reach the necessary sample size. We establish the asymptotic PCS of these procedures under very mild conditions and provide a finite-sample empirical evaluation of them as well.
{"title":"General-purpose ranking and selection for computer simulation","authors":"Soonhui Lee, B. Nelson","doi":"10.1080/0740817X.2015.1125043","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1125043","url":null,"abstract":"ABSTRACT Many indifference-zone Ranking-and-Selection (R&S) procedures have been invented for choosing the best simulated system. To obtain the desired Probability of Correct Selection (PCS), existing procedures exploit knowledge about the particular combination of system performance measure (e.g., mean, probability, variance, quantile) and assumed output distribution (e.g., normal, exponential, Poisson). In this article, we take a step toward general-purpose R&S procedures that work for many types of performance measures and output distributions, including situations where different simulated alternatives have entirely different output distribution families. There are only two versions of our procedure: with and without the use of common random numbers. To obtain the required PCS we exploit intense computation via bootstrapping, and to mitigate the computational effort we create an adaptive sample-allocation scheme that guides the procedure to quickly reach the necessary sample size. We establish the asymptotic PCS of these procedures under very mild conditions and provide a finite-sample empirical evaluation of them as well.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"555 - 564"},"PeriodicalIF":0.0,"publicationDate":"2016-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1125043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-13 | DOI: 10.1080/0740817X.2015.1125044
Xing Gao, Weijun Zhong
ABSTRACT Information security economics, an emerging and thriving research topic, uses economic approaches to address the problems of distorted incentives for stakeholders in an Internet environment, including firms, hackers, the public sector, and other participants. To alleviate consumer anxiety about the loss of sensitive information, and to further increase consumer demand, firms usually integrate their information security investment strategies, which help capture market share from competitors, with their security information sharing strategies, which increase consumer demand across all member firms in industry-based information sharing centers. Using differential game theory, this article investigates dynamic strategies for security investment and information sharing for two competing firms under targeted attacks, in which both firms can influence the value of their information assets through the endogenous determination of pricing rates. We analytically and numerically examine how both security investment rates and information sharing rates are affected by several key parameters in a non-cooperative scenario, including the efficiency of security investment rates, sensitivity parameters for pricing rates, coefficients of consumer demand losses, and the density of targeted attacks. Our results reveal that, confronted with a higher coefficient of consumer demand loss and a higher density of targeted attacks, both firms are reluctant to aggressively defend against hackers and would rather decrease the negative effect of hacker attacks by lowering their pricing rates. We also derive feedback equilibrium solutions for the situations where both firms cooperate in security investment, information sharing, or both. It is revealed that although a higher hacker attack density always decreases a firm’s integral profit, the firms are not always willing to cooperate in security investment and information sharing. Specifically, the superior firm benefits most when both firms fully cooperate and least when they behave fully non-cooperatively. However, the inferior firm enjoys the highest integral profit when the firms cooperate only in information sharing and the lowest integral profit in the fully cooperative situation.
{"title":"A differential game approach to security investment and information sharing in a competitive environment","authors":"Xing Gao, Weijun Zhong","doi":"10.1080/0740817X.2015.1125044","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1125044","url":null,"abstract":"ABSTRACT Information security economics, an emerging and thriving research topic, attempts to address the problems of distorted incentives for stakeholders in an Internet environment, including firms, hackers, the public sector, and other participants, using economic approaches. To alleviate consumer anxiety about the loss of sensitive information, and to further increase consumer demand, firms usually integrate their information security investment strategies to capture market share from competitors and their security information sharing strategies to increase consumer demand across all member firms in industry-based information sharing centers. Using differential game theory, this article investigates dynamic strategies for security investment and information sharing for two competing firms under targeted attacks, in which both firms can influence the value of their information assets through the endogenous determination of pricing rates. We analytically and numerically examine how both security investment rates and information sharing rates are affected by several key parameters in a non-cooperative scenario, including the efficiency of security investment rates, sensitivity parameters for pricing rates, coefficients of consumer demand losses, and the density of targeted attacks. Our results reveal that, confronted with a higher coefficient of consumer demand loss and a higher density of targeted attacks, both firms are reluctant to aggressively defend against hackers and would rather decrease the negative effect of hacker attacks by lowering their pricing rates. Also, we derive feedback equilibrium solutions for the situation where both firms cooperate in security investment, information sharing, or both. It is revealed that although a higher hacker attack density always decreases a firm's integral profits, both firms are not always willing to cooperate in security investment and information sharing. Specifically, the superior firm benefits most when both firms fully cooperate and benefits the least when they behave fully non-cooperatively. However, the inferior firm enjoys the highest integral profit when both firms only cooperate in information sharing and the lowest integral profit in the completely cooperative situation.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"511 - 526"},"PeriodicalIF":0.0,"publicationDate":"2016-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1125044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-13 | DOI: 10.1080/0740817X.2015.1126004
S. Eksioglu, H. Karimi, B. Eksioglu
ABSTRACT Co-firing biomass is a strategy that leads to reduced greenhouse gas (GHG) emissions in coal-fired power plants. Incentives such as the Production Tax Credit (PTC) are designed to help power plants overcome the financial challenges faced during the implementation phase. Decision makers at power plants face two big challenges. The first is identifying whether the benefits from incentives such as the PTC can overcome the costs associated with co-firing. The second is identifying the extent to which a plant should co-fire in order to maximize profits. We present a novel mathematical model that integrates production and transportation decisions at power plants. Such a model enables decision makers to evaluate the impacts of co-firing on system performance and the cost of generating renewable electricity. The model is a nonlinear mixed-integer program that captures the loss in process efficiency due to using biomass, a product with a lower heating value than coal, as well as the additional investment costs necessary to support biomass co-firing and the savings due to the PTC. To efficiently solve real-life instances of this problem, we present a Lagrangean relaxation model that provides upper bounds and two linear approximations that provide lower bounds for the problem at hand. We use numerical analysis to evaluate the quality of these bounds. We develop a case study using data from nine states located in the southeastern United States. Via numerical experiments we observe that (i) incentives such as the PTC do facilitate renewable energy production; (ii) the PTC should not be “one size fits all”; instead, tax credits could be a function of plant capacity or the amount of renewable electricity produced; and (iii) there is a need for comprehensive tax credit schemes to encourage renewable electricity production and reduce GHG emissions.
{"title":"Optimization models to integrate production and transportation planning for biomass co-firing in coal-fired power plants","authors":"S. Eksioglu, H. Karimi, B. Eksioglu","doi":"10.1080/0740817X.2015.1126004","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1126004","url":null,"abstract":"ABSTRACT Co-firing biomass is a strategy that leads to reduced greenhouse gas emissions in coal-fired power plants. Incentives such as the Production Tax Credit (PTC) are designed to help power plants overcome the financial challenges faced during the implementation phase. Decision makers at power plants face two big challenges. The first challenge is identifying whether the benefits from incentives such as PTC can overcome the costs associated with co-firing. The second challenge is identifying the extent to which a plant should co-fire in order to maximize profits. We present a novel mathematical model that integrates production and transportation decisions at power plants. Such a model enables decision makers to evaluate the impacts of co-firing on the system performance and the cost of generating renewable electricity. The model presented is a nonlinear mixed integer program that captures the loss in process efficiencies due to using biomass, a product that has lower heating value as compared with coal; the additional investment costs necessary to support biomass co-firing as well as savings due to PTC. In order to solve efficiently real-life instances of this problem we present a Lagrangean relaxation model that provides upper bounds and two linear approximations that provide lower bounds for the problem in hand. We use numerical analysis to evaluate the quality of these bounds. We develop a case study using data from nine states located in the southeast region of the United States. Via numerical experiments we observe that (i) incentives such as PTC do facilitate renewable energy production; (ii) the PTC should not be “one size fits all”; instead, tax credits could be a function of plant capacity or the amount of renewable electricity produced; (iii) there is a need for comprehensive tax credit schemes to encourage renewable electricity production and reduce GHG emissions.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"901 - 920"},"PeriodicalIF":0.0,"publicationDate":"2016-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1126004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-08 | DOI: 10.1080/0740817X.2015.1078016
D. Gupta, Fei Li
ABSTRACT Transit agencies use reserve drivers to cover open work that arises from planned and unplanned time off, equipment breakdowns, weather, and special events. Work assignment decisions must be made sequentially without information about future job requests, a driver’s earlier assignment may not be interrupted to accommodate a new job (no pre-emption), and the scheduler may need to select a particular driver when multiple drivers can perform a job. Motivated by this instance of the interval scheduling problem, we propose a randomized algorithm that carries a performance guarantee relative to the best offline solution and simultaneously performs better than any deterministic algorithm. A key objective of this article is to develop an algorithm that performs well in both average and worst-case scenarios. For this reason, our approach includes discretionary parameters that allow the user to achieve a balance between a myopic approach (accept all jobs that can be scheduled) and a strategic approach (consider accepting only jobs longer than a certain threshold). We test our algorithm on data from a large transit agency and show that it performs well relative to the commonly used myopic approach. Although this article is motivated by a transit industry application, the approach we develop is applicable to a whole host of settings involving on-demand processing of jobs. Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc.
{"title":"Reserve driver scheduling","authors":"D. Gupta, Fei Li","doi":"10.1080/0740817X.2015.1078016","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078016","url":null,"abstract":"ABSTRACT Transit agencies use reserve drivers to cover open work that arises from planned and unplanned time off, equipment breakdowns, weather, and special events. Work assignment decisions must be made sequentially without information about future job requests, a driver’s earlier assignment may not be interrupted to accommodate a new job (no pre-emption), and the scheduler may need to select a particular driver when multiple drivers can perform a job. Motivated by this instance of the interval scheduling problem, we propose a randomized algorithm that carries a performance guarantee relative to the best offline solution and simultaneously performs better than any deterministic algorithm. A key objective of this article is to develop an algorithm that performs well in both average and worst-case scenarios. For this reason, our approach includes discretionary parameters that allow the user to achieve a balance between a myopic approach (accept all jobs that can be scheduled) and a strategic approach (consider accepting only if jobs are longer than a certain threshold). We test our algorithm on data from a large transit agency and show that it performs well relative to the commonly used myopic approach. Although this article is motivated by a transit industry application, the approach we develop is applicable in a whole host of applications involving on-demand-processing of jobs. Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"193 - 204"},"PeriodicalIF":0.0,"publicationDate":"2016-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-06 | DOI: 10.1080/0740817X.2015.1078014
Junbo Son, P. Brennan, Shiyu Zhou
ABSTRACT Asthma is a very common and chronic lung disease that impacts a large portion of the population and all ethnic groups. Driven by developments in sensor and mobile communication technology, novel Smart Asthma Management (SAM) systems have recently been established. In SAM systems, patients can create a detailed temporal event log regarding their key health indicators through easy access to a website or their smartphone. Thus, this detailed event log can be obtained inexpensively and aggregated for a large number of patients to form a centralized database for SAM systems. Taking advantage of the data available in SAM systems, we propose an individualized prognostic model based on the unique rescue inhaler usage profile of each individual patient. The model combines two statistical models into a unified prognostic framework. The application of the proposed model to SAM is illustrated in this article, and the effectiveness of the method is shown by both a numerical study and a case study that uses real-world data.
{"title":"Rescue inhaler usage prediction in smart asthma management systems using joint mixed effects logistic regression model","authors":"Junbo Son, P. Brennan, Shiyu Zhou","doi":"10.1080/0740817X.2015.1078014","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078014","url":null,"abstract":"ABSTRACT Asthma is a very common and chronic lung disease that impacts a large portion of population and all ethnic groups. Driven by developments in sensor and mobile communication technology, novel Smart Asthma Management (SAM) systems have been recently established. In SAM systems, patients can create a detailed temporal event log regarding their key health indicators through easy access to a website or their smartphone. Thus, this detailed event log can be obtained inexpensively and aggregated for a large number of patients to form a centralized database for SAM systems. Taking advantage of the data available in SAM systems, we propose an individualized prognostic model based on the unique rescue inhaler usage profile of each individual patient. The model jointly combines two statistical models into a unified prognostic framework. The application of the proposed model to SAM is illustrated in this article and the effectiveness of the method is shown by both a numerical study and a case study that uses real-world data.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"333 - 346"},"PeriodicalIF":0.0,"publicationDate":"2016-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-02 | DOI: 10.1080/0740817X.2015.1019164
Li Zeng, Xinwei Deng, Jian Yang
ABSTRACT In tissue-engineered scaffold fabrication, the degradation of scaffolds is a critical issue because it needs to match the rate of new tissue formation in the human body. However, scaffold degradation is a very complicated process, making degradation regulation a challenging task. To provide a scientific understanding of the degradation of scaffolds, we propose a novel constrained hierarchical model (CHM) for the degradation data. The proposed model has two levels, with the first level characterizing scaffold degradation profiles and the second level characterizing the effect of process parameters on the degradation. Moreover, it can incorporate expert knowledge into the modeling through meaningful constraints, leading to insightful inference on scaffold degradation. Bayesian methods are used for parameter estimation and model comparison. In the case study, the proposed method is illustrated and compared with existing methods using data from a novel tissue-engineered scaffold fabrication process. A numerical study is conducted to examine the effect of sample size on model estimation.
{"title":"Constrained hierarchical modeling of degradation data in tissue-engineered scaffold fabrication","authors":"Li Zeng, Xinwei Deng, Jian Yang","doi":"10.1080/0740817X.2015.1019164","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1019164","url":null,"abstract":"ABSTRACT In tissue-engineered scaffold fabrication, the degradation of scaffolds is a critical issue because it needs to match with the rate of new tissue formation in the human body. However, scaffold degradation is a very complicated process, making degradation regulation a challenging task. To provide a scientific understanding on the degradation of scaffolds, we propose a novel constrained hierarchical model (CHM) for the degradation data. The proposed model has two levels, with the first level characterizing scaffold degradation profiles and the second level characterizing the effect of process parameters on the degradation. Moreover, it can incorporate expert knowledge in the modeling through meaningful constraints, leading to insightful inference on scaffold degradation. Bayesian methods are used for parameter estimation and model comparison. In the case study, the proposed method is illustrated and compared with existing methods using data from a novel tissue-engineered scaffold fabrication process. A numerical study is conducted to examine the effect of sample size on model estimation.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"16 - 33"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1019164","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59749763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-02 | DOI: 10.1080/0740817X.2015.1047069
Nailong Zhang, Qingyu Yang
ABSTRACT The microstructure of a material can strongly influence its properties, such as strength, hardness, and wear resistance, which in turn play an important role in the quality of products made from these materials. Existing studies on a material’s microstructure have mainly focused on the characteristics of a single microstructure sample, while the variation between different microstructure samples is ignored. In this article, we propose a novel random effect autologistic regression model that can be used to characterize the variation in microstructures between different samples for two-phase materials that consist of two distinct parts with different chemical structures. The proposed model differs from the classic autologistic regression model in that we consider the unit-to-unit variability among the microstructure samples, which is characterized by the random effect parameters. To estimate the model parameters given a set of microstructure samples, we first derive a likelihood function, based on which a maximum likelihood estimation method is developed. However, maximizing the likelihood function of the proposed model is generally difficult as it has a complex form. To overcome this challenge, we further develop a stochastic approximation expectation maximization algorithm to estimate the model parameters. A simulation study is conducted to verify the proposed methodology. A real-world example of a dual-phase high-strength steel is used to illustrate the developed methods.
{"title":"A random effect autologistic regression model with application to the characterization of multiple microstructure samples","authors":"Nailong Zhang, Qingyu Yang","doi":"10.1080/0740817X.2015.1047069","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1047069","url":null,"abstract":"ABSTRACT The microstructure of a material can strongly influence its properties such as strength, hardness, wear resistance, etc., which in turn play an important role in the quality of products produced from these materials. Existing studies on a material's microstructure have mainly focused on the characteristics of a single microstructure sample and the variation between different microstructure samples is ignored. In this article, we propose a novel random effect autologistic regression model that can be used to characterize the variation in microstructures between different samples for two-phase materials that consist of two distinct parts with different chemical structures. The proposed model differs from the classic autologistic regression model in that we consider the unit-to-unit variability among the microstructure samples, which is characterized by the random effect parameters. To estimate the model parameters given a set of microstructure samples, we first derive a likelihood function, based on which a maximum likelihood estimation method is developed. However, maximizing the likelihood function of the proposed model is generally difficult as it has a complex form. To overcome this challenge, we further develop a stochastic approximation expectation maximization algorithm to estimate the model parameters. A simulation study is conducted to verify the proposed methodology. A real-world example of a dual-phase high strength steel is used to illustrate the developed methods.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"34 - 42"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1047069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-01-02 | DOI: 10.1080/0740817X.2015.1055391
Xiao Liu, L. Tang
ABSTRACT Line Replaceable Units (LRUs), which can be quickly replaced at a first-level maintenance facility, are widely deployed on capital-intensive systems in order to maintain high system availability. Failed LRUs are repaired after replacement and reused as fully serviceable spare units. Demand for spare LRUs depends on factors such as the time-varying installed base, reliability deterioration or growth over maintenance cycles, the procurement lead time of new LRUs, the turn-around lead time of repaired LRUs, etc. In this article, we propose an integrated framework for both reliability analysis and spares provisioning for LRUs with a time-varying installed base. We assume that each system consists of multiple types of LRUs, and associated with each type of LRU is a non-stationary sub-failure process. The failure of a system is triggered by sub-failure processes that are statistically dependent. A hierarchical probability model is developed for the demand forecasting of LRUs. Based on the forecasted demand, the optimum inventory level is found through dynamic programming. An application example is presented. A computer program, called the Integrated Platform for Reliability Analysis and Spare Provision, is available that makes the proposed methods readily applicable.
{"title":"Reliability analysis and spares provisioning for repairable systems with dependent failure processes and a time-varying installed base","authors":"Xiao Liu, L. Tang","doi":"10.1080/0740817X.2015.1055391","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1055391","url":null,"abstract":"ABSTRACT Line Replaceable Units (LRUs), which can be quickly replaced at a first-level maintenance facility, are widely deployed on capital-intensive systems in order to maintain high system availability. Failed LRU are repaired after replacement and reused as fully serviceable spare units. Demand for spare LRUs depends on factors such as the time-varying installed base, reliability deterioration or growth over maintenance cycles, procurement leadtime of new LRUs, turn-around leadtime of repaired LRUs, etc. In this article, we propose an integrated framework for both reliability analysis and spares provisioning for LRUs with a time-varying installed base. We assume that each system consists of multiple types of LRUs and associated with each type of LRU is a non-stationary sub-failure process. The failure of a system is triggered by sub-failure processes that are statistically dependent. A hierarchical probability model is developed for the demand forecasting of LRUs. Based on the forecasted demand, the optimum inventory level is found through dynamic programming. An application example is presented. A computer program, called the Integrated Platform for Reliability Analysis and Spare Provision, is available that makes the proposed methods readily applicable.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"43 - 56"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1055391","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}