Pub Date: 2024-10-04 | DOI: 10.1177/00236772241262119
Maya J Bodnar, I Joanna Makowska, Courtney T Boyd, Catherine A Schuppli, Daniel M Weary
Isoflurane anesthesia prior to carbon dioxide euthanasia is recognized as a refinement by many guidelines. Facilities lacking access to a vaporizer can use the "drop" method, whereby liquid anesthetic is introduced into an induction chamber. Knowing the least aversive concentration of isoflurane is critical. Previous work has demonstrated that isoflurane administered with the drop method at a concentration of 5% is aversive to mice. Other work has shown that lower concentrations (1.7% to 3.7%) of isoflurane can be used to anesthetize mice with the drop method, but aversion to these concentrations has not been tested. We assessed aversion to these lower isoflurane concentrations administered with the drop method, using a conditioned place aversion (CPA) paradigm. Female C57BL/6J (OT-1) mice (n = 28) were randomly allocated to one of three isoflurane concentrations: 1.7%, 2.7%, and 3.7%. Mice were acclimated to a light-dark apparatus. Prior to and following dark (+ isoflurane) and light chamber conditioning sessions, mice underwent an initial and final preference assessment; the change in the duration spent within the dark chamber between the initial and final preference tests was used to calculate a CPA score. Aversion increased with increasing isoflurane concentration: from 1.7% to 2.7% to 3.7% isoflurane, mean ± SE CPA score decreased from 19.6 ± 20.1 s to -25.6 ± 23.2 s, to -116.9 ± 30.6 s (F1,54 = 15.4, p < 0.001). Our results suggest that, when using the drop method to administer isoflurane, concentrations between 1.7% and 2.7% can be used to minimize female mouse aversion to induction.
Title: Mouse aversion to induction with isoflurane using the drop method. (Laboratory Animals, Online First)
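As the abstract describes, the CPA score is simply the change in time spent in the treatment-paired (dark) chamber between the final and initial preference tests, summarised per group as mean ± SE. A minimal sketch of that calculation in Python (the timings below are hypothetical, not the paper's data):

```python
from statistics import mean, stdev
from math import sqrt

def cpa_scores(initial_dark_s, final_dark_s):
    """CPA score per mouse: time (s) in the treatment-paired chamber in the
    final test minus the initial test; negative values indicate aversion."""
    return [final - initial for initial, final in zip(initial_dark_s, final_dark_s)]

def mean_se(scores):
    """Group summary as mean and standard error of the mean."""
    return mean(scores), stdev(scores) / sqrt(len(scores))
```

Applied to three hypothetical mice whose dark-chamber time drops after conditioning, the scores are negative and the group mean indicates aversion.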
Pub Date: 2024-10-01 | Epub Date: 2024-09-24 | DOI: 10.1177/00236772241276785
Reid D Landes
Cage effects: some researchers worry about them, some don't, and some aren't even aware of them. When statistical analyses do not account for cage effects, there is real reason to worry. Regardless of researchers' worries or lack thereof, all researchers should be aware of how cage effects can affect the results. The "how" depends, in part, on the experimental design. Here, I (a) define cage effects; (b) illustrate a completely randomized design (CRD) often used in animal experiments; (c) explain how statistical significance is artificially inflated when cage effects are ignored; and (d) give guidance on proper analyses and on how to increase statistical power in CRDs.
Title: How cage effects can hurt statistical analyses of completely randomized designs. (Laboratory Animals, pp. 476-480)
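The inflation of significance that the article describes is easy to demonstrate by simulation: generate null data (no treatment effect) in which cage-mates share a random cage effect, then compare false-positive rates when the analysis treats the animal versus the cage as the experimental unit. A stdlib-only sketch, with illustrative parameter values and a normal approximation to the t distribution (not code from the article):

```python
import math
import random
from statistics import mean, stdev

def two_sided_p(t):
    """Two-sided p-value for a test statistic, via the normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

def t_stat(a, b):
    """Welch-type two-sample t statistic."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

def simulate(n_sims=400, cages=10, animals=5, sd_cage=1.0, sd_animal=1.0, alpha=0.05):
    """False-positive rates under the null: animals vs cage means as the unit."""
    rng = random.Random(42)
    naive = correct = 0
    for _ in range(n_sims):
        group_animals, group_cagemeans = [], []
        for _g in range(2):  # two treatment groups, no true treatment effect
            all_animals, cage_means = [], []
            for _c in range(cages):
                shared = rng.gauss(0.0, sd_cage)  # effect common to cage-mates
                vals = [shared + rng.gauss(0.0, sd_animal) for _ in range(animals)]
                all_animals += vals
                cage_means.append(mean(vals))
            group_animals.append(all_animals)
            group_cagemeans.append(cage_means)
        naive += two_sided_p(t_stat(*group_animals)) < alpha      # animal as unit
        correct += two_sided_p(t_stat(*group_cagemeans)) < alpha  # cage as unit
    return naive / n_sims, correct / n_sims
```

With these settings (half the variance coming from cages), the animal-level analysis rejects the true null far more often than the nominal 5%, while the cage-mean analysis stays close to it.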
Pub Date: 2024-10-01 | DOI: 10.1177/00236772241272991
P S Verhave, R van Eenige, Iacw Tiebosch
Blinding and randomisation are important methods for increasing the robustness of pre-clinical studies, as incomplete or improper implementation of either is a recognised source of bias. Randomisation ensures that any known and unknown covariates that could introduce bias are randomly distributed over the experimental groups. This diminishes differences between the experimental groups that might otherwise have contributed to false-positive or false-negative results. Methods for randomisation range from simple randomisation (e.g. rolling a die) to advanced randomisation strategies involving the use of specialised software. Blinding, on the other hand, ensures that researchers are unaware of group allocation during preparation, execution, data acquisition and/or analysis. This minimises the risk of unintentional influences resulting in bias. Methods for blinding require strong protocols and a team approach. In this review, we outline methods for randomisation and blinding and give practical tips on how to implement them, with a focus on animal studies.
Title: Methods for applying blinding and randomisation in animal experiments. (Laboratory Animals 58(5), pp. 419-426)
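For illustration, simple balanced randomisation and coded blinding of group labels can be scripted in a few lines. This is a generic sketch (group names, seeds and the third-party key holder are hypothetical), not a procedure from the review:

```python
import random

def randomise(animal_ids, groups, seed=1234):
    """Balanced simple randomisation: shuffle the animals, then deal them
    round-robin into the experimental groups."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {g: ids[i::len(groups)] for i, g in enumerate(groups)}

def blind_labels(allocation, seed=99):
    """Replace group names with neutral codes so experimenters stay blinded.
    The unblinding key should be held by someone outside the experiment."""
    rng = random.Random(seed)
    codes = [f"Group {chr(65 + i)}" for i in range(len(allocation))]
    rng.shuffle(codes)
    key = dict(zip(allocation, codes))  # real name -> neutral code
    blinded = {key[g]: members for g, members in allocation.items()}
    return blinded, key
```

Calling `randomise(range(12), ["control", "treated", "high-dose"])` yields three groups of four animals; `blind_labels` then hides which coded group is which until the key is revealed at analysis.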
Pub Date: 2024-10-01 | DOI: 10.1177/00236772241276806
Penny S Reynolds
Statistically based experimental designs have been available for over a century. However, many preclinical researchers are completely unaware of these methods, and the success of experiments is usually equated only with 'p < 0.05'. By contrast, a well-thought-out experimental design strategy provides data with evidentiary and scientific value. A value-based strategy requires implementation of statistical design principles coupled with basic project management techniques. This article outlines the three phases of a value-based design strategy: proper framing of the research question, statistically based operationalisation through careful selection and structuring of appropriate inputs, and incorporation of methods that minimise bias and process variation. Appropriate study design increases study validity and the evidentiary strength of the results, reduces animal numbers, and reduces waste from noninformative experiments. Statistically based experimental design is thus a key component of the 'Reduction' pillar of the 3R (Replacement, Reduction, Refinement) principles for ethical animal research.
Title: Study design: think 'scientific value' not 'p-values'. (Laboratory Animals 58(5), pp. 404-410)
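One concrete piece of the 'Reduction' argument above is an a priori sample-size estimate: animal numbers are justified before the experiment rather than discovered afterwards. A minimal sketch using the standard normal-approximation formula for a two-group comparison (this is generic statistics, not a method from the article; exact t-based calculations give slightly larger n):

```python
from math import ceil

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Approximate animals per group to detect a mean difference `delta`
    given residual SD `sigma`, two-sided alpha, and desired power.
    Uses tabulated z values for the common alpha/power choices."""
    z_alpha = {0.05: 1.96, 0.01: 2.576}[alpha]   # two-sided critical value
    z_beta = {0.8: 0.8416, 0.9: 1.2816}[power]   # quantile for the power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
```

For an effect equal to one SD this gives 16 animals per group at 80% power; halving the detectable effect roughly quadruples the requirement.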
Pub Date: 2024-10-01 | Epub Date: 2024-09-05 | DOI: 10.1177/00236772241247105
Naomi Altman, Martin Krzywinski
Variability is inherent in most biological systems due to differences among members of the population. Two types of variation are commonly observed in studies: differences among samples and the "error" in estimating a population parameter (e.g. mean) from a sample. While these concepts are fundamentally very different, the associated variation is often expressed using similar notation: an interval that represents a range of values with a lower and upper bound. In this article we discuss how common intervals are used (and misused).
Title: Depicting variability and uncertainty using intervals and error bars. (Laboratory Animals, pp. 453-457)
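The two kinds of interval the abstract distinguishes map onto different quantities: the standard deviation describes variability among individuals, while the standard error (and a confidence interval built from it) describes uncertainty in the estimated mean. A stdlib-only sketch (generic statistics, not code from the article):

```python
from statistics import mean, stdev
from math import sqrt

def summarise(sample, z=1.96):
    """Return mean, SD, SEM and an approximate 95% CI for the mean.
    SD: spread among individuals (does not shrink as n grows).
    SEM: uncertainty of the mean estimate (shrinks as 1/sqrt(n)).
    The CI here uses the normal z value; for small samples a t critical
    value for n-1 degrees of freedom should replace it."""
    n = len(sample)
    m = mean(sample)
    sd = stdev(sample)
    sem = sd / sqrt(n)
    ci = (m - z * sem, m + z * sem)
    return m, sd, sem, ci
```

Plotting SD bars answers "how variable are the animals?", while SEM or CI bars answer "how precisely is the group mean known?"; mixing the two is the misuse the article warns about.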
Pub Date: 2024-10-01 | Epub Date: 2024-08-19 | DOI: 10.1177/00236772241248509
Fulvio Magara, Benjamin Boury-Jamot
Absence of statistical significance (i.e., p > 0.05) in the results of a frequentist test comparing two samples is often used as evidence of absence of difference, or absence of effect of a treatment, on the measured variable. Such conclusions are often wrong because absence of significance may merely result from a sample size that is too small to reveal an effect. To conclude that there is no meaningful effect of a treatment/condition, it is necessary to use an appropriate statistical approach. For frequentist statistics, a simple tool for this goal is the 'two one-sided t-tests' (TOST) procedure, a form of equivalence test that relies on the a priori definition of a minimal difference considered to be relevant. In other words, the smallest effect size of interest should be established in advance. We present the principles of this test and give examples where it allows correct interpretation of the results of a classical t-test assuming absence of difference. Equivalence tests are also very useful in probing whether certain significant results are also biologically meaningful, because when comparing large samples it is possible to find significant results in both an equivalence test and in a two-sample t-test assuming no difference as the null hypothesis.
Title: About statistical significance, and the lack thereof. (Laboratory Animals, pp. 448-452)
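The TOST logic the abstract describes can be written in a handful of lines: pick the smallest difference of interest (delta) in advance, then run two one-sided tests against the null hypotheses that the true difference lies at or beyond ±delta. This sketch compares the two t statistics against a caller-supplied one-sided critical value rather than computing p-values, to stay stdlib-only (illustrative, not the authors' code):

```python
from statistics import mean, stdev
from math import sqrt

def tost_equivalent(a, b, delta, t_crit):
    """Two one-sided tests (TOST) for equivalence of two sample means.
    delta: smallest difference considered biologically relevant (set a priori).
    t_crit: one-sided critical t for the chosen alpha and the Welch degrees
    of freedom (e.g. about 1.70 for alpha = 0.05 and df around 30).
    Equivalence is declared only if BOTH one-sided nulls are rejected."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))  # Welch SE
    diff = mean(a) - mean(b)
    t_lower = (diff + delta) / se  # H0: true difference <= -delta
    t_upper = (diff - delta) / se  # H0: true difference >= +delta
    return t_lower > t_crit and t_upper < -t_crit
```

Two tight samples differing by a trivial amount are declared equivalent for a generous delta, while a shift larger than delta correctly fails the test; note that failing TOST does not prove a difference, just as a non-significant t-test does not prove equivalence.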
Pub Date: 2024-10-01 | DOI: 10.1177/00236772241276808
Romain-Daniel Gosselin
The normality assumption postulates that empirical data derive from a normal (Gaussian) population. It is a pillar of inferential statistics that enables the theorization of probability functions and the computation of the corresponding p-values. Breaching this assumption may not impose a formal mathematical constraint on the computation of inferential outputs (e.g., p-values) but may render them inoperable and possibly lead to unethical waste of laboratory animals. Various methods, including statistical tests and qualitative visual examination, can reveal incompatibility with normality, and the choice of a procedure should not be trivialized. This minireview provides a brief overview of graphical methods and statistical tests commonly employed to evaluate congruence with normality, with special attention to the potential pitfalls associated with their application. Normality is an unachievable ideal that practically never accurately describes natural variables, and the detrimental consequences of non-normality can be mitigated by using large samples. Therefore, the very concept of preliminary normality testing is also, arguably provocatively, questioned.
Title: Testing for normality: a user's (cautionary) guide. (Laboratory Animals 58(5), pp. 433-437)
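Alongside formal tests (e.g. Shapiro-Wilk via scipy.stats.shapiro) and Q-Q plots, a quick quantitative screen is to look at sample skewness and excess kurtosis, both of which are near zero for normal data. A stdlib-only sketch of that screen (a rough heuristic, not a procedure prescribed by the article; the cut-offs are conventional rules of thumb):

```python
from statistics import mean

def moments_check(sample):
    """Rough normality screen: sample skewness and excess kurtosis.
    Both are approximately 0 for normal data; as a rule of thumb,
    |skewness| > 1 or |excess kurtosis| > 2 flags a clear departure.
    Like formal tests, this is insensitive in small samples and
    oversensitive in very large ones."""
    n = len(sample)
    m = mean(sample)
    s2 = sum((x - m) ** 2 for x in sample) / n        # population variance
    skew = sum((x - m) ** 3 for x in sample) / (n * s2 ** 1.5)
    kurt = sum((x - m) ** 4 for x in sample) / (n * s2 ** 2) - 3.0
    return skew, kurt
```

A symmetric sample yields skewness near zero, whereas a sample with one large outlier is strongly right-skewed; neither outcome replaces looking at the data.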
Pub Date: 2024-10-01 | Epub Date: 2024-08-19 | DOI: 10.1177/00236772241246602
Stanley E Lazic
Most classical statistical tests assume data are normally distributed. If this assumption is not met, researchers often turn to non-parametric methods. These methods have some drawbacks, and if no suitable non-parametric test exists, a normal distribution may be used inappropriately instead. A better option is to select a distribution appropriate for the data from dozens available in modern software packages. Selecting a distribution that represents the data generating process is a crucial but overlooked step in analysing data. This paper discusses several alternative distributions and the types of data that they are suitable for.
Title: Ditching the norm: Using alternative distributions for biological data analysis. (Laboratory Animals, pp. 438-442)
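A simple way to act on the abstract's advice is to fit two candidate distributions and compare their maximised log-likelihoods. The sketch below contrasts a normal fit with a lognormal fit (natural for positive, right-skewed measures such as latencies or concentrations); it is a generic illustration, not the paper's method, and in practice an information criterion and several candidate families would be compared:

```python
from math import log, pi, sqrt

def normal_loglik(xs, mu, sigma):
    """Log-likelihood of data under Normal(mu, sigma)."""
    n = len(xs)
    return -n / 2 * log(2 * pi * sigma ** 2) - sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2)

def fit_best(xs):
    """Fit normal and lognormal by maximum likelihood; return the winner.
    Requires strictly positive data for the lognormal branch."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = sqrt(sum((x - mu) ** 2 for x in xs) / n)   # MLE divides by n
    ll_norm = normal_loglik(xs, mu, sigma)
    logs = [log(x) for x in xs]
    lmu = sum(logs) / n
    lsig = sqrt(sum((v - lmu) ** 2 for v in logs) / n)
    # Lognormal likelihood = normal likelihood on the log scale
    # minus the Jacobian term sum(log x).
    ll_lognorm = normal_loglik(logs, lmu, lsig) - sum(logs)
    return ("lognormal" if ll_lognorm > ll_norm else "normal"), ll_norm, ll_lognorm
```

Multiplicatively spread data favour the lognormal fit, while symmetric additive data favour the normal; choosing the family that matches the data-generating process is exactly the overlooked step the abstract highlights.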
Pub Date: 2024-10-01 | Epub Date: 2024-09-24 | DOI: 10.1177/00236772241247106
Naomi Altman, Martin Krzywinski
P-values combined with estimates of effect size are used to assess the importance of experimental results. However, their interpretation can be invalidated by selection bias when testing multiple hypotheses, fitting multiple models or even informally selecting results that seem interesting after observing the data. We offer an introduction to principled uses of p-values (targeted at the non-specialist) and identify questionable practices to be avoided.
Title: Understanding p-values and significance. (Laboratory Animals, pp. 443-447)
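One principled response to the multiple-testing problem mentioned above is to adjust the p-values rather than informally cherry-picking the interesting ones. The Holm step-down procedure controls the family-wise error rate and is uniformly less conservative than plain Bonferroni; a short stdlib-only implementation (a standard method, offered here as illustration, not from the article):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values for a family of m hypotheses.
    Sort p-values ascending, multiply the k-th smallest by (m - k + 1),
    cap at 1, and enforce monotonicity so adjusted values never decrease
    with rank. An adjusted p-value below alpha remains significant after
    correction for the whole family."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # monotonicity across ranks
        adjusted[i] = running_max
    return adjusted
```

For the family [0.01, 0.04, 0.03, 0.005], Holm returns [0.03, 0.06, 0.06, 0.02]: two results survive at alpha = 0.05, whereas plain Bonferroni (multiplying everything by 4) would leave only one.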