In this paper, we construct an age-structured epidemic model to analyze the optimal vaccine allocation strategy in an epidemic. We focus on two topics: the first one is the optimal vaccination interval between the first and second doses, and the second one is the optimal vaccine allocation ratio between young and elderly people. On the first topic, we show that the optimal interval tends to become longer as the relative efficacy of the first dose to the second dose (RE) increases. On the second topic, we show that the heterogeneity in the age-dependent susceptibility (HS) affects the optimal allocation ratio between young and elderly people, whereas the heterogeneity in the contact frequency among different age groups (HC) tends to affect the effectiveness of the vaccination campaign. A counterfactual simulation suggests that the epidemic wave in the summer of 2021 in Japan could have been greatly mitigated if the optimal vaccine allocation strategy had been taken.
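As a toy companion to this abstract, the two-group allocation question can be sketched with a minimal SIR model in which a fixed vaccine stock is split between young and elderly groups. The contact matrix, fatality ratios, and stock size below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def expected_deaths(p_young, stock=0.3, days=300, dt=0.1):
    """Toy two-group SIR (young, elderly): give a fraction p_young of
    a fixed vaccine stock to the young group, the rest to the elderly,
    and return expected deaths.  All parameters are illustrative."""
    N = np.array([0.7, 0.3])                    # population shares
    beta = np.array([[0.40, 0.10],              # contact mix (HC) and
                     [0.10, 0.20]])             # susceptibility (HS), toy values
    gamma = 0.2                                 # recovery rate
    ifr = np.array([0.001, 0.050])              # infection fatality ratios
    doses = stock * np.array([p_young, 1.0 - p_young])
    S = np.clip(N - doses, 0.0, None)           # vaccinees fully protected here
    I = np.full(2, 1e-4)
    S0 = S.copy()
    for _ in range(int(days / dt)):
        new_inf = S * (beta @ (I / N)) * dt     # new infections this step
        S, I = S - new_inf, I + new_inf - gamma * I * dt
    return float(ifr @ (S0 - S))                # cumulative infections x IFR

# crude scan for the allocation ratio minimizing deaths
grid = np.linspace(0.0, 1.0, 21)
best = min(grid, key=expected_deaths)
```

Changing the off-diagonal entries of `beta` (HC) or the row scales (HS) shifts where the minimum lands, which is the qualitative point of the abstract.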
Kuniya, T., Nakata, T., Fujii, D. "Optimal vaccine allocation strategy: Theory and application to the early stage of COVID-19 in Japan." Mathematical Biosciences and Engineering 21(6): 6359-6371 (2024). doi:10.3934/mbe.2024277
The decision-making process for computational offloading is a critical aspect of mobile edge computing (MEC), and the choice of offloading strategy strongly affects the latency and energy consumption of the MEC system. This paper proposes an offloading scheme based on an enhanced sine-cosine optimization algorithm (SCAGA) designed for the "edge-end" architecture scenario within edge computing. The research presented in this paper covers the following aspects: (1) establishment of computational resource allocation models and computational cost models for edge computing scenarios; (2) introduction of an enhanced sine-cosine optimization algorithm that builds on a Lévy-flight sine-cosine optimizer and incorporates the roulette wheel selection and gene mutation operators commonly found in genetic algorithms; (3) simulation experiments evaluating the SCAGA-based offloading scheme, demonstrating its ability to effectively reduce system latency and optimize offloading utility. Comparative experiments also show improvements in system latency, mobile user energy consumption, and offloading utility over alternative offloading schemes.
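A minimal sketch of the core sine-cosine update with a genetic-style mutation step follows; the full SCAGA also uses a Lévy-flight step and roulette-wheel selection, which are omitted here, and all parameters are illustrative:

```python
import numpy as np

def sca_minimize(f, dim=4, pop=20, iters=200, lb=-5.0, ub=5.0,
                 mut_rate=0.05, seed=0):
    """Minimal sine-cosine algorithm (SCA) with a genetic-style gene
    mutation step.  Illustrative sketch, not the paper's SCAGA."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        r1 = 2.0 * (1.0 - t / iters)            # shrinks: exploration -> exploitation
        r2 = rng.uniform(0.0, 2.0 * np.pi, (pop, dim))
        r3 = rng.uniform(0.0, 2.0, (pop, dim))
        trig = np.where(rng.random((pop, dim)) < 0.5, np.sin(r2), np.cos(r2))
        X = np.clip(X + r1 * trig * np.abs(r3 * best - X), lb, ub)
        mutate = rng.random((pop, dim)) < mut_rate   # gene mutation
        X[mutate] = rng.uniform(lb, ub, int(mutate.sum()))
        cand = min(X, key=f)
        if f(cand) < f(best):                   # keep the best point found (elitism)
            best = cand.copy()
    return best, f(best)

# demo on the sphere function
x, fx = sca_minimize(lambda v: float(np.sum(v * v)))
```

In an offloading setting, `f` would be replaced by the joint latency/energy cost model evaluated on a candidate offloading decision vector.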
Ni, M., Zhang, G., Yang, Q., Yin, L. "Research on MEC computing offload strategy for joint optimization of delay and energy consumption." Mathematical Biosciences and Engineering 21(6): 6336-6358 (2024). doi:10.3934/mbe.2024276
Hyperparameter optimization (HPO) has evolved into a well-established research topic over the decades. With the success and wide application of deep learning, HPO has garnered increased attention, particularly within the realm of machine learning model training and inference. Its primary objective is to mitigate the challenges of manual hyperparameter tuning, which can be ad hoc and reliant on human expertise, hindering reproducibility and inflating deployment costs. Recognizing the growing significance of HPO, this paper surveyed classical HPO methods, approaches for accelerating the optimization process, HPO in an online setting (dynamic algorithm configuration, DAC), and HPO with more than one objective to optimize (multi-objective HPO). Acceleration strategies were categorized into multi-fidelity, bandit-based, and early-stopping methods; DAC algorithms encompassed gradient-based, population-based, and reinforcement learning-based methods; multi-objective HPO can be approached via scalarization, metaheuristics, and model-based algorithms tailored for multi-objective situations. A tabulated overview of popular frameworks and tools for HPO was provided, catering to the interests of practitioners.
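One concrete acceleration strategy from the survey's taxonomy, successive halving (the core of bandit-based multi-fidelity methods), can be sketched in a few lines; the toy noisy objective below is an assumption for illustration:

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Successive halving, the core of bandit-based multi-fidelity HPO:
    score every surviving config at the current budget, keep the best
    1/eta fraction, multiply the budget by eta, and repeat."""
    budget, survivors = min_budget, list(configs)
    for _ in range(rounds):
        survivors = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = survivors[:max(1, len(survivors) // eta)]
        budget *= eta
    return survivors[0]

# toy objective: loss minimized near lr = 0.1, noise shrinking with budget
random.seed(0)
def noisy_loss(lr, budget):
    return (lr - 0.1) ** 2 + random.gauss(0.0, 0.1 / budget)

best = successive_halving([0.001, 0.01, 0.05, 0.1, 0.5, 1.0], noisy_loss)
```

The efficiency gain comes from spending large budgets only on configurations that survived cheap low-fidelity screening.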
Tan, J. M., Liao, H., Liu, W., Fan, C., Huang, J., Liu, Z., Yan, J. "Hyperparameter optimization: Classics, acceleration, online, multi-objective, and tools." Mathematical Biosciences and Engineering 21(6): 6289-6335 (2024). doi:10.3934/mbe.2024275
Models intended to describe the time evolution of a gene network must somehow include transcription, the DNA-templated synthesis of RNA, and translation, the RNA-templated synthesis of proteins. In eukaryotes, the DNA template for transcription can be very long, often consisting of tens of thousands of nucleotides, and lengthy pauses may punctuate this process. Accordingly, transcription can last for many minutes, in some cases hours. There is a long history of introducing delays in gene expression models to take the transcription and translation times into account. Here we study a family of detailed transcription models that includes initiation, elongation, and termination reactions. We establish a framework for computing the distribution of transcription times, and work out these distributions for some typical cases. For elongation, a fixed delay is a good model provided elongation is fast compared to initiation and termination, and there are no sites where long pauses occur. The initiation and termination phases of the model then generate a nontrivial delay distribution, and elongation shifts this distribution by an amount corresponding to the elongation delay. When initiation and termination are relatively fast, the distribution of elongation times can be approximated by a Gaussian. A convolution of this Gaussian with the initiation and termination time distributions gives another analytic approximation to the transcription time distribution. If there are long pauses during elongation, because of the modularity of the family of models considered, the elongation phase can be partitioned into reactions generating a simple delay (elongation through regions where there are no long pauses), and reactions whose distribution of waiting times must be considered explicitly (initiation, termination, and motion through regions where long pauses are likely). 
In these cases, the distribution of transcription times again involves a nontrivial part and a shift due to fast elongation processes.
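The convolution picture above can be checked numerically with a Monte Carlo sketch: exponential initiation and termination times plus an elongation phase built from many fast exponential steps, which is approximately Gaussian by the central limit theorem. The rates below are illustrative, not fitted to any transcript:

```python
import numpy as np

rng = np.random.default_rng(1)
k_init, k_term = 0.5, 1.0          # initiation/termination rates (illustrative)
n_steps, k_elong = 2000, 50.0      # elongation: many fast nucleotide steps

# Monte Carlo transcription times: exponential initiation and termination
# phases, plus an elongation phase that is a sum of many exponential steps
# and hence approximately Gaussian.
n = 100_000
init = rng.exponential(1.0 / k_init, n)
term = rng.exponential(1.0 / k_term, n)
elong = rng.gamma(n_steps, 1.0 / k_elong, n)   # sum of n_steps Exp(k_elong) steps
total = init + elong + term                    # convolution of the three phases

mean_theory = 1.0 / k_init + n_steps / k_elong + 1.0 / k_term
```

With these rates the elongation spread (about sqrt(n_steps)/k_elong) is small next to the initiation/termination spread, matching the regime where elongation acts as a near-fixed delay shifting a nontrivial initiation/termination distribution.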
Hosseini, S. H., Roussel, M. R. "Analytic delay distributions for a family of gene transcription models." Mathematical Biosciences and Engineering 21(6): 6225-6262 (2024). doi:10.3934/mbe.2024273
This paper focuses on feedback global stabilization and observer construction for a sterile insect technique model. The sterile insect technique (SIT) is one of the most ecological methods for controlling insect pests responsible for worldwide crop destruction and disease transmission. In this work, we construct a feedback law that globally asymptotically stabilizes an SIT model at the extinction equilibrium. Since applying this type of control requires measuring different states of the target insect population, and since in practice some states are more difficult or more expensive to measure than others, it is important to know how to construct a state estimator that, from a few well-chosen measured states, estimates the remaining ones; we build such an observer in the second part of our work. In the last part, we show that the feedback control can be applied with the estimated states to stabilize the full system.
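A hedged sketch of the closed-loop idea: wild insects reproduce only through wild-wild matings, and the sterile release rate is fed back proportionally to the measured population. This scalar toy model and its parameters are assumptions for illustration, not the paper's SIT model:

```python
def simulate(x0=100.0, k=4.0, b=1.2, d=1.0, dt=0.01, T=60.0):
    """Toy SIT closed loop: wild population x reproduces only through
    wild-wild matings, with probability x/(x + u) under a sterile
    release u = k*x fed back from the measured state.  Illustrative
    scalar sketch, not the paper's model."""
    x = x0
    for _ in range(int(T / dt)):
        u = k * x                                # state-feedback sterile release
        x += (b * x * x / (x + u) - d * x) * dt  # mating-limited births minus deaths
    return x

final = simulate()               # with feedback: driven toward extinction
uncontrolled = simulate(k=0.0)   # without releases: the pest grows
```

With u = k*x the effective birth rate becomes b/(1+k); choosing k large enough that b/(1+k) < d drives the population to the extinction equilibrium, which is the mechanism the feedback law exploits.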
Agbo Bidi, K. "Feedback stabilization and observer design for sterile insect technique models." Mathematical Biosciences and Engineering 21(6): 6263-6288 (2024). doi:10.3934/mbe.2024274
In recent years, deep learning (DL) techniques have achieved remarkable success in various fields of computer vision. This progress was attributed to the vast amounts of data utilized to train these models, as they facilitated the learning of more intricate and detailed feature information about target objects, leading to improved model performance. However, in most real-world tasks, it was challenging to gather sufficient data for model training. Insufficient datasets often resulted in models prone to overfitting. To address this issue, enhance model performance and generalization ability, and mitigate overfitting in data-limited scenarios, image data augmentation methods have been proposed. These methods generate synthetic samples to augment the original dataset and have emerged as a preferred strategy for boosting model performance when data is scarce. This review first introduced commonly used and highly effective image data augmentation techniques, along with a detailed analysis of their advantages and disadvantages. Second, it presented several datasets frequently employed for evaluating the performance of image data augmentation methods and examined how advanced augmentation techniques can enhance model performance. Third, it discussed the applications and performance of data augmentation techniques in various computer vision domains. Finally, it provided an outlook on potential future research directions for image data augmentation methods.
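A few of the classical, label-preserving augmentations such surveys cover can be sketched with plain NumPy; the flip probability and noise scale are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Classical label-preserving augmentations on an HxWxC image in
    [0, 1]: random horizontal flip, random 90-degree rotation, and
    additive Gaussian pixel noise.  Probabilities and scales are
    illustrative."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                          # horizontal flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # rotate 0/90/180/270 degrees
    out = out + rng.normal(0.0, 0.05, out.shape)    # additive pixel noise
    return np.clip(out, 0.0, 1.0)

batch = rng.random((8, 32, 32, 3))
augmented = np.stack([augment(im) for im in batch])
```

Each synthetic sample keeps the original label, which is what lets augmentation enlarge a small dataset without new annotation effort.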
Zeng, W. "Image data augmentation techniques based on deep learning: A survey." Mathematical Biosciences and Engineering 21(6): 6190-6224 (2024). doi:10.3934/mbe.2024272
Many current electronic medical record (EMR) sharing schemes that use proxy re-encryption and blockchain do not fully consider the potential threat of malicious node impersonation attacks. This oversight could lead to data leakage as attackers masquerade as legitimate users or proxy nodes during the sharing process. To deal with this problem, we propose an EMR sharing scheme based on proxy re-encryption and blockchain that protects against impersonation attacks. First, we counter the threat of impersonation attacks by generating a shared temporary key and assigning tasks to multiple proxy nodes. Second, we use a random function to ensure that the selection of encryption proxy nodes is fair. Third, we combine blockchain with the InterPlanetary File System to address the insufficient storage capacity of the sharing process and to ensure the secure storage of EMRs. A security proof shows that our scheme provides anti-impersonation, anti-collusion, and chosen-plaintext attack resistance during EMR sharing. Additionally, experiments on the Chain33 blockchain platform show that our scheme significantly increases efficiency.
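The fair proxy-node selection step can be illustrated with a hedged sketch: rank candidates by a keyed hash under a jointly generated seed, so no single party controls the draw and anyone holding the seed can recheck it. This is a hypothetical toy, not the paper's construction; the node ids and seed are made up:

```python
import hashlib
import hmac

def select_proxies(nodes, shared_seed, n=3):
    """Rank candidate proxy nodes by an HMAC of their id under a jointly
    generated seed.  Deterministic given the seed, so the selection is
    verifiable by every participant; hypothetical sketch only."""
    def score(node_id):
        return hmac.new(shared_seed, node_id.encode(), hashlib.sha256).hexdigest()
    return sorted(nodes, key=score)[:n]

nodes = [f"proxy-{i}" for i in range(10)]
chosen = select_proxies(nodes, b"shared-temporary-key")
```

Because the HMAC is pseudorandom in the seed, the ranking is unpredictable before the seed is fixed yet reproducible afterwards, which is the fairness property the random-function step aims at.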
Zhang, J., Guo, R., Shi, Y., Tang, W. "An anti-impersonation attack electronic health record sharing scheme based on proxy re-encryption and blockchain." Mathematical Biosciences and Engineering 21(6): 6167-6189 (2024). doi:10.3934/mbe.2024271
COVID-19 is caused by the SARS-CoV-2 virus, which has produced variants and raised concerns about a potential resurgence since the pandemic outbreak in 2019. Predicting infectious disease outbreaks is crucial for effective prevention and control. This study aims to predict the transmission patterns of COVID-19 with machine learning methods such as support vector machines, random forests, and XGBoost, applied to confirmed cases, death cases, and imported cases, respectively. The study categorizes the transmission trends into three groups: L0 (decrease), L1 (maintain), and L2 (increase). We develop a risk index function to quantify changes in the transmission trends, which is used to label the data for classification. High accuracy is achieved when estimating the transmission trends for the confirmed cases (91.5-95.5%), death cases (85.6-91.8%), and imported cases (77.7-89.4%). Notably, the confirmed cases exhibit higher accuracy than the death and imported case data. L2 predictions outperformed L0 and L1 in all cases. Predicting L2 is important because it can signal new outbreaks. Robust L2 prediction is thus crucial for the timely implementation of control policies to manage transmission dynamics.
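The trend-labeling idea can be sketched as a toy risk index that compares consecutive weekly windows of daily cases; the thresholds below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def label_trend(cases, window=7, up=1.1, down=0.9):
    """Toy risk index in the spirit of the study: compare the mean of
    the latest window of daily cases with the previous window and map
    the ratio to L0 (decrease), L1 (maintain), or L2 (increase).
    Thresholds are illustrative."""
    cases = np.asarray(cases, dtype=float)
    recent = cases[-window:].mean()
    previous = cases[-2 * window:-window].mean()
    risk = recent / max(previous, 1e-9)   # risk index: weekly growth ratio
    if risk > up:
        return "L2"
    if risk < down:
        return "L0"
    return "L1"

rising = label_trend([100] * 7 + [150] * 7)
```

Labels produced this way would then serve as classification targets for models such as random forests or XGBoost trained on case-count features.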
Ahn, H., Lee, H. "Predicting the transmission trends of COVID-19: an interpretable machine learning approach based on daily, death, and imported cases." Mathematical Biosciences and Engineering 21(5): 6150-6166 (2024). doi:10.3934/mbe.2024270
In this paper, we investigate an optimal harvesting problem for a spatially explicit fishery model that was previously analyzed. On the surface, this problem looks innocent, but if parameters are set to where a singular arc occurs, two complex questions arise. The first pertains to Fuller's phenomenon (or chattering), in which the optimal control possesses a singular arc that cannot be concatenated with the bang-bang arcs without prompting infinite oscillations over a finite region. 1) How do we numerically assess whether or not a problem chatters in cases where we cannot analytically prove such a phenomenon? The second question concerns the implementation of an optimal control. 2) When an optimal control has regions that are difficult to implement, how can we find alternative strategies that are both suboptimal and realistic to use? Although the former question does not apply to all optimal harvesting problems, most fishery managers should be concerned about the latter. Interestingly, for this specific problem, our techniques for answering the first question also answer the second. Our methods use an extended version of the switch point algorithm (SPA), which handles control problems having initial and terminal conditions on the states. In our numerical experiments, we obtain strong empirical evidence that the harvesting problem chatters, and we find three alternative harvesting strategies with fewer switches that are realistic to implement and near optimal.
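A deliberately simple cousin of the problem above: a one-switch bang-bang harvesting policy for a logistic stock, with the switch time found by a grid scan. The SPA optimizes switch points directly; this sketch only illustrates how the harvest objective depends on a switch point, with illustrative parameters:

```python
import numpy as np

def yield_for_switch(s, T=10.0, dt=0.001, r=1.0, hmax=0.6, x0=0.2):
    """One-switch bang-bang harvest of a logistic stock: no harvest
    before time s, full-effort harvest afterwards; returns total catch.
    Parameters are illustrative."""
    x, catch = x0, 0.0
    for k in range(int(T / dt)):
        h = hmax if k * dt >= s else 0.0         # bang-bang control
        catch += h * x * dt                      # accumulate harvest
        x += (r * x * (1.0 - x) - h * x) * dt    # logistic growth minus harvest
    return catch

switches = np.linspace(0.0, 10.0, 51)
best_s = max(switches, key=yield_for_switch)
```

An optimal policy with few switches, as found here, is exactly the kind of implementable alternative the abstract contrasts with a chattering control, whose infinitely many switches no manager could execute.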
Atkins, S., Hager, W. W., Martcheva, M. "The switch point algorithm applied to a harvesting problem." Mathematical Biosciences and Engineering 21(5): 6123-6149 (2024). doi:10.3934/mbe.2024269
In this work, we investigated the finite-time passivity problem of neutral-type complex-valued neural networks with time-varying delays. On the basis of the Lyapunov functional, Wirtinger-type inequality technique, and linear matrix inequalities (LMIs) approach, new sufficient conditions were derived to ensure the finite-time boundedness (FTB) and finite-time passivity (FTP) of the concerned network model. At last, two numerical examples with simulations were presented to demonstrate the validity of our criteria.
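LMI-based criteria like these reduce to matrix-inequality feasibility checks; a minimal delay-free sketch of the underlying computation (solve a Lyapunov equation and verify positive definiteness) is below, with an illustrative stable matrix rather than anything from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a Hurwitz matrix A, the LMI  A^T P + P A < 0 with P > 0 is feasible;
# here we obtain P from the Lyapunov equation A^T P + P A = -Q and then
# verify its positive definiteness numerically.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q
eigs = np.linalg.eigvalsh((P + P.T) / 2.0)    # symmetrize before the eigen check
```

The delayed, complex-valued conditions of the paper involve larger block LMIs of the same character, typically handled with a dedicated semidefinite solver.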
Akca, H., Aouiti, C., Touati, F., Xu, C. "Finite-time passivity of neutral-type complex-valued neural networks with time-varying delays." Mathematical Biosciences and Engineering 21(5): 6097-6122 (2024). doi:10.3934/mbe.2024268