Background: The present study examined whether dexmedetomidine premedication can reduce the heart rate (HR) and peak HR response during modified electroconvulsive therapy (ECT). The acute hemodynamic stress induced by ECT may elevate the risk of cardiovascular complications in psychiatric patients, and previous research has suggested that β-blockers and α-2 adrenergic agonists effectively alleviate the hyperdynamic response to ECT. The current study therefore sought to determine whether dexmedetomidine offers similar benefits in regulating HR fluctuations during modified ECT. Materials and Methods: In this prospective, double-blind, randomized controlled study, 60 psychiatric patients aged 18–50 years, of American Society of Anesthesiologists (ASA) physical status I–II, and scheduled for ECT were included. Patients were randomly divided into two groups: group D received 1 µg/kg of dexmedetomidine in 50 mL of normal saline (NS), and group C received 50 mL of NS alone. HR was measured every 15 s for 5 min following the modified ECT, and changes in peak HR were recorded and analyzed. Results: Mean age was 29.5 ± 7.82 years in group C and 32.5 ± 8.37 years in group D, and mean weight was 59.4 ± 5.33 kg and 58.6 ± 4.57 kg, respectively; the groups did not differ significantly in age (P = 0.157) or weight (P = 0.519). Baseline mean HR did not differ significantly between the groups; however, HR immediately before ECT differed significantly (P = 0.001), and the rise in HR within 5 min after ECT was significantly smaller in group D than in group C (P = 0.001). Conclusions: Dexmedetomidine at a dose of 1 µg/kg as premedication produced a notable decrease in the HR and peak HR responses during modified ECT.
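A minimal sketch of the kind of comparison this abstract describes: HR sampled every 15 s for 5 min, the peak extracted per patient, and the rise in HR compared between groups with an unpaired t-test. All values and the response shape below are synthetic placeholders, not the study's data.

```python
# Illustrative sketch (not the authors' analysis). Synthetic HR traces are
# sampled every 15 s for 5 min (20 samples); the rise (peak minus pre-ECT HR)
# is compared between groups with an unpaired two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peak_hr(pre_ect_hr: float, rise: float, n_samples: int = 20) -> float:
    """Simulate HR sampled every 15 s for 5 min and return the peak."""
    trace = (pre_ect_hr
             + rise * np.exp(-np.arange(n_samples) / 6.0)  # decaying response
             + rng.normal(0, 2, n_samples))                # measurement noise
    return float(trace.max())

# Assumed effect: group C (saline) has a larger hyperdynamic response than
# group D (dexmedetomidine 1 µg/kg) in this synthetic example.
pre_c, pre_d = rng.normal(88, 8, 30), rng.normal(80, 8, 30)
peak_c = np.array([peak_hr(hr, rise=35) for hr in pre_c])
peak_d = np.array([peak_hr(hr, rise=15) for hr in pre_d])

rise_c, rise_d = peak_c - pre_c, peak_d - pre_d
t, p = stats.ttest_ind(rise_c, rise_d)  # unpaired t-test on the HR rise
print(f"mean rise C = {rise_c.mean():.1f} bpm, D = {rise_d.mean():.1f} bpm, "
      f"t = {t:.2f}, P = {p:.2g}")
```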
{"title":"Effectiveness of Dexmedetomidine as premedication to modify the heart rate response to modified electroconvulsive therapy: a randomized controlled trial","authors":"Manjunath Shivapujimath, Nikhita Kalyanshetti, Raghavendra Kalal","doi":"10.4103/mgmj.mgmj_165_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_165_23","url":null,"abstract":"Background: The present study aims to examine whether using Dexmedetomidine as a premedication can reduce heart rate (HR) and peak HR during modified electroconvulsive therapy (ECT). It is known that the acute hemodynamic stress induced by ECT may elevate the risk of cardiovascular complications in psychiatric patients. Previous research has suggested that β-blockers and α-2 adrenergic agonists effectively alleviate the hyperdynamic responses to ECT. Therefore, the current study seeks to determine whether Dexmedetomidine can offer similar benefits in regulating HR fluctuations during the modified ECT procedure. Materials and Methods: In this prospective, double-blinded, randomized controlled study, a total of 60 psychiatric patients aged between 18 and 50 years, categorized as per the American Society of Anaesthesiologists score I and II, and scheduled for ECT, were included. These patients were randomly divided into two groups: Group D, which received 50 mL of normal saline (NS) with 1 µg/kg of Dexmedetomidine, and Group C, which received 50 mL of NS only. HR measurements were taken every 15 s for 5 min following the modified ECT, and any changes in peak HR were carefully recorded and analyzed. Results: The mean age (years) and weight (kg) in groups C and D were 29.5 ± 7.82 and 32.5 ± 8.37, 59.4 ± 5.33 and 58.6 ± 4.57, respectively. Both groups did not differ significantly concerning age (P = 0.157) and weight (P = 0.519). Statistically, no significant difference in mean HR (baseline, before ECT, and peak HR following ECT within 5 min) was observed between study groups. In group D, the rise in HR was significantly less when compared to group C (P = 0.001). The groups had a significant (P = 0.001) difference in HR before ECT. Conclusions: The administration of Dexmedetomidine at a dose of 1 µg/kg as premedication resulted in a notable decrease in HR and peak HR responses during the modified ECT.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136366659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Globally, information and communication technology (ICT) has been recognized as a pivotal strategy for embracing the evolving healthcare landscape. Despite its substantial role in facilitating information sharing, its adoption remains notably constrained in most developing nations. Objective: This study investigated the knowledge, accessibility, and usage of ICT resources among nurses in secondary healthcare facilities in Ondo State, Nigeria. Materials and Methods: The research employed a survey approach involving 200 nurses. A structured instrument was developed to gather data, and its validity and internal consistency were verified. Both descriptive and inferential analyses were performed; hypothesis testing used Pearson correlation and chi-square tests. Results: Most nurses (127; 61.5%) demonstrated a commendable understanding of ICT within secondary healthcare facilities in Ondo State. Just over half of the participants (108; 54%) had ICT equipment in their units, whereas other essential ICT infrastructure was notably absent. Chi-square testing found no statistically significant association between nurses' age and their level of ICT knowledge (P = 0.10). The study discerned a direct relationship between knowledge and utilization, with a strong correlation coefficient of 0.738. Likewise, gender was not significantly associated with nurses' ICT proficiency (P = 0.459). Conclusion: The findings indicate that the observed disparities can be rectified by ensuring an adequate supply of ICT resources and offering ongoing training for practicing nurses. This approach should improve healthcare outcomes in secondary healthcare institutions in Ondo State.
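For illustration, a hedged sketch of the two tests named in the Methods: a chi-square test on an age-by-knowledge contingency table and a Pearson correlation between knowledge and utilization scores. The counts and scores below are invented, not the survey data.

```python
# Sketch of the hypothesis tests named in the Methods. The contingency
# counts and paired scores are synthetic, chosen only for illustration.
import numpy as np
from scipy import stats

# Rows: three age bands; columns: knowledge level (good / fair / poor).
table = np.array([[40, 20, 10],
                  [35, 30, 15],
                  [20, 18, 12]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # P > 0.05: no association

# Paired knowledge and utilization scores per nurse (synthetic, n = 200),
# constructed with a built-in positive relationship.
rng = np.random.default_rng(1)
knowledge = rng.normal(70, 10, 200)
utilization = 0.7 * knowledge + rng.normal(0, 7, 200)
r, p_r = stats.pearsonr(knowledge, utilization)
print(f"Pearson r = {r:.3f}, P = {p_r:.3g}")  # strong positive r, cf. 0.738
```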
{"title":"Information and communication technology: Examining knowledge, availability, and utilization among nurses in secondary health care facilities in Ondo State, Nigeria","authors":"ModupeJokotola Oye, JanetAdebukola Adeniran, OlayinkaSenami Jonathan-Adebiyi","doi":"10.4103/mgmj.mgmj_91_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_91_23","url":null,"abstract":"Background: Globally, information and communication technology (ICT) has been recognized as a pivotal strategy for embracing the evolving healthcare landscape. Despite its substantial role in facilitating information sharing, its adoption remains notably constrained in most developing nations. Objective: This study investigated the knowledge, accessibility, and usage of ICT resources among nurses in secondary healthcare establishments within Ondo State, Nigeria. Materials and Methods: The research employed a survey approach involving the participation of 200 nurses. A structured instrument was created to gather data, ensuring its validity and internal consistency. Both descriptive and inferential analysis of data was done. Hypothesis testing utilized Pearson correlation and Chi-square tests. Results: Most nurses, comprising 127 individuals (61.5%), demonstrated a commendable understanding of ICT within secondary healthcare establishments in Ondo State. Half of the participants (108, 54%) possessed ICT equipment in their respective units, whereas other essential ICT infrastructure was notably absent. Chi-square tests revealed an association between nurses’ age and their level of ICT knowledge (P = 0.10). This study discerned a direct connection between knowledge and utilization, denoted by a strong correlation coefficient of 0.738. Notably, gender substantially correlated with nurses’ ICT proficiency as evidenced by a significant P value of 0.459. Conclusion: The research indicates that rectifying the observed disparities can be achieved by ensuring an adequate supply of ICT resources and offering ongoing training sessions for practicing nurses. This approach will improve healthcare outcomes in secondary healthcare institutions in Ondo State.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136367870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4103/mgmj.mgmj_196_23
Sushil Kumar, Pradnya Dongargaonkar
The renowned author Aldous Huxley captured global interest in the potential of laboratory-born babies, rather than traditional childbirth, with his iconic 1932 science fiction novel "Brave New World."[1] While some initial progress has been made in this direction, his projections regarding human fertility largely remain within the realm of speculative fiction, awaiting eventual realization. The idea of overcoming barriers to procreation has roots reaching back to 1890, when the British zoologist Walter Heape showed that embryo transfer was possible: he placed Angora-fertilized eggs into a Belgian Hare doe rabbit, which gave birth to Angora offspring, in effect an interspecies surrogacy. It was much later, in 1978, that the birth of the first in vitro fertilization (IVF) baby was witnessed; Robert Edwards and Patrick Steptoe published this report in The Lancet under the title "Birth after Reimplantation of a Human Embryo."[2] Since its clinical introduction, IVF has redefined the ability of the human species to procreate. From baby Louise to now, some 30 years later, about 3 million babies have been born through IVF. Although IVF chiefly benefits infertile couples, about 10% of its beneficiaries fall outside that group: clinical indications for IVF have rapidly expanded to include medical and genetic conditions and fertility preservation.[3] An additional driver of IVF utilization is the growing societal acceptance of nontraditional families, including single and same-sex parents, and social media that has opened our minds to think beyond the conventional.[4] A career-driven society now has the option of oocyte freezing, which can later yield healthy embryos in defiance of age bars and has revolutionized the modern concept of fertility assistance.
IVF involves a series of steps: controlled ovarian hyperstimulation, oocyte retrieval, fertilization, embryo culture, embryo selection, and embryo transfer. A significant limitation of the technique is the inability to enhance the quality of the oocytes or sperm obtained; efforts have instead been directed toward augmenting the number of eggs or sperm collected.[5] Despite expert and stringent morphological criteria for choosing embryos, an average of 2.3 embryos must be transferred to achieve a live-birth rate of only 52.3%.[6] This leaves a substantial margin of uncertainty, close to 48%, that proves challenging to surmount, mainly because techniques for enhancing gamete quality are lacking. Furthermore, the uncertain implantation outcome necessitates transferring more embryos to counterbalance this unpredictability. Using multiple embryos during transfer contributes to a significant rate of multiple births, reaching about 30% in patients undergoing assisted reproductive techniques (ARTs). This circumstance adds to the associated perinatal morbidity, with implications such as preterm birth, prolonged stay in the neonatal intensive care unit (NICU), heightened susceptibility to infection, and impaired lung development.
Embryo selection relies on morphological features and the course of development in culture.[7] Favorable selection criteria include factors such as blastomere number, the presence or absence of multinucleation, early cleavage to the two-cell stage, and a minimal proportion of cellular fragmentation in the embryo. The expansion of the blastocoel and the cohesion and count of the inner-cell-mass and trophectoderm cells are key determinants of implantation and pregnancy rates. This traditional assessment by embryologists and clinical fertility specialists is poised to be replaced by artificial intelligence (AI)-based sequential embryo assessment models coupled with computer algorithms for precise morphological embryo selection.[8-10] Moreover, relying solely on morphological assessment of embryos leaves room for error. To address this challenge, metabolomic profiling of the embryo culture medium using proton nuclear magnetic resonance (1H NMR) has been introduced, and the metabolomic profile correlates with the reproductive potential of the embryo. 1H NMR spectroscopy has shown that reduced levels of alanine, pyruvate, and glucose in the culture medium are associated with successful pregnancy, whereas glutamate levels are elevated compared with embryos that fail to implant, possibly because its generation via alpha-ketoglutarate and ammonium lowers the potentially harmful ammonium levels during embryo development. Using 1H NMR, the sensitivity for identifying true implantation/pregnancy was 88.2%, and the specificity for accurately predicting non-implantation/non-pregnancy was 88.2%. The selection process, once a "beauty contest" that merely judged an embryo's appearance, will soon incorporate metabolic, proteomic, and genomic markers as assessment criteria.[11]
Microfluidics is a multidisciplinary field of research and design that precisely controls and manipulates fluid behavior through small-scale geometric constraints, making surface forces dominate over volume forces. Its potential contributions to ART include precisely controlled fluidic manipulation of gametes and embryos, a biomimetic environment for culture, microscale genetic and molecular bioanalysis, and miniaturization and automation.[11] Adopting automated IVF systems would offer multiple advantages: standardization of workflow, fewer errors, lower cost, less contamination, and the potential for incremental improvement of the system through machine learning.
Another area that remains difficult to control is the elimination of certain genetic diseases. With the falling cost of carrier screening and the growing number of detectable mutations, many new patients may be identified as carriers and build their families through IVF with preimplantation genetic testing (PGT). Indeed, population genomic screening of young adults could save substantial healthcare costs by preventing rare diseases and cancers. Future applications of PGT may extend to multifactorial diseases and whole-exome screening. However, current attempts to introduce polygenic-score-based embryo selection into clinical practice appear premature and fraught with ethical challenges. Recent improvements in micromanipulation techniques and the development of the CRISPR-Cas9 gene-editing tool have raised the prospect of germline genome modification (GGM) for treating severe monogenic diseases; indeed, GGM has already been achieved in animal embryos. Mitochondrial replacement therapy for preventing heritable mitochondrial DNA disease has advanced even further than GGM, with clinical trials already under way in the United Kingdom.
Even selecting the best embryo does not guarantee implantation. Following the advent of transfer cycles customized by the endometrial receptivity array (ERA), the future of endometrial preparation lies in drugs and imaging technologies that actively assist apposition, adhesion, and implantation, and that will tell us whether an embryo has implanted before beta-hCG is measured, preventing wasted cycles of luteal-phase support. A dynamic, uterus-like environment with optimal oxygen and nutrient delivery can be obtained in vitro, making the uncertain implantation of the embryo more assured, and in time-lapse machines we will actively watch embryos grow, even beyond the blastocyst stage. A fetus with an ideal genetic composition, free of the possibility of genetic disease, with better intergenerational health, growing in an artificial, dynamic, womb-like environment regulated by AI, and all within ethical bounds: this is the fertility future the world looks forward to.
Finally, we wish to pay tribute to the distinguished writer Aldous Huxley, whose famous 1932 masterpiece "Brave New World" was the driving force behind this article. We find ourselves closer than ever to his prescient vision of human fertility. His expectation of genetic manipulation of the human fetus has become a conceivable prospect through the application of CRISPR gene-editing technology, although it is currently limited to animal models by ethical constraints. Similarly, his vision of gestating fetuses outside the womb has progressed remarkably, as evidenced by advances in artificial placenta technology: trials in sheep and piglets have been successful,[12] and human trials for extremely preterm infants are under way. Today, the science fiction Aldous Huxley wove is no longer at an incredible distance from reality. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
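As a worked illustration of the 88.2% sensitivity and specificity quoted above for the 1H-NMR metabolomic classifier, the sketch below derives both measures from a confusion matrix; the counts are hypothetical, chosen only so that each ratio equals 15/17 ≈ 88.2%.

```python
# Minimal sketch: sensitivity and specificity from a confusion matrix.
# The counts are hypothetical, not the published study's data.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # true implantations correctly identified
    specificity = tn / (tn + fp)  # non-implantations correctly identified
    return sensitivity, specificity

sens, spec = sens_spec(tp=15, fn=2, tn=15, fp=2)  # 15/17 = 88.2% each
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```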
{"title":"<i>In vitro</i> fertilization: From science fiction to reality and beyond","authors":"Sushil Kumar, Pradnya Dongargaonkar","doi":"10.4103/mgmj.mgmj_196_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_196_23","url":null,"abstract":"The renowned author Aldous Huxley captured global interest in the potential of laboratory-born babies rather than traditional childbirth with his iconic 1932 science fiction novel “Brave New World.”[1] While some initial progress has been made in this direction, his projections regarding human fertility largely remain within the realm of speculative fiction, eagerly awaiting the eventual realization. The idea of being able to overcome barriers in procreation has roots deep-seated back to 1890—when a British zoologist Walter Heape showed that it was possible to transfer embryos when he put Angora-fertilized eggs into a Belgian Hare doe rabbit, which gave birth to Angora offspring metaphorically an interspecies surrogacy. It was far later than this that the first birth of an in vitro fertilization (IVF) baby was witnessed in 1978. Robert Edwards and Patrick Steptoe published this report in The Lancet titled “Birth after Reimplantation of a Human Embryo.”[2] Since its clinical introduction, IVF has redefined the ability of the human species to procreate. From baby Louis to now 30 years later, about 3 million babies have been born with IVF. Even though IVF benefits infertile couples, about 10% of all the beneficiaries are not restricted to just them. Clinical indications for IVF have rapidly expanded to include medical and genetic conditions and fertility preservation.[3] An additional driver of IVF utilization is the growing societal acceptance of nontraditional families, including single and same-sex parents, and social media that has opened our minds to think beyond the conventional.[4] The Career driven society has the option of oocyte freezing available, which later form healthy embryos defying age bars and has revolutionized the modern concept of fertility assistance. IVF involves a series of steps, including controlled ovarian hyperstimulation, oocyte retrieval, fertilization, embryo culture, embryo selection, and embryo transfer. A significant limitation of this technique is the inability to enhance the quality of obtained oocytes or sperm. In response to this challenge, efforts have been directed toward augmenting the number of collected eggs or sperm.[5] Despite using expert and stringent morphological criteria to choose embryos, only 52.3% of 2.3 transferred embryos typically result in a live birth.[6] This creates a significant 48% margin of uncertainty that proves challenging to surmount, mainly attributed to the absence of techniques for enhancing gamete quality. Furthermore, the uncertain implantation outcome necessitates transferring more embryos to counterbalance this unpredictability. Using multiple embryos during transfer contributes to a significant rate of multiple births, reaching about 30% in patients undergoing assisted reproductive techniques (ARTs). 
This circumstance adds to the associated perinatal morbidity, resulting in implications such as preterm birth, prolonged stay in the neonatal intensive care unit (NICU), heighten","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136367873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4103/mgmj.mgmj_117_23
Varsha Pandey, Vishal Kulkarni, Vanita Bhaskar, Veenapani Mire
Background: There has been a notable rise in cardiac-related fatalities globally, especially over the last five decades. In India, ischemic heart disease has become prevalent, affecting roughly 10% of the population. For forensic specialists, establishing the cause of death in individuals previously in good health can be complex, and autopsies are crucial in evaluating the underlying factors responsible for such deaths. This research sought to identify and scrutinize the range of histopathological heart abnormalities that significantly influence the determination of the cause of death. Materials and Methods: This study was conducted in the Department of Pathology from January 2020 to December 2020. During this period, 209 whole-heart specimens were received, of which 208 underwent comprehensive macroscopic and microscopic examination. Results: Of the 208 cases examined, 94 showed evidence of early or advanced atherosclerosis, and 65 showed early or late signs of myocardial infarction. Myocardial hypertrophy was evident in 29 cases. One case each of isolated myocarditis and pericarditis was observed. Fatty streaks were identified in 32 cases, and three showed red blood cells with sickle-cell morphology. Cardiac rupture was detected in one case, and another revealed metastasis from a poorly differentiated tumor. Notably, in 90 cases the cause of death remained undetermined despite thorough macroscopic and microscopic autopsy. Conclusion: The primary cause of cardiovascular fatality was atherosclerosis-related myocardial infarction.
{"title":"Spectrum of histopathological lesions of heart: An autopsy study at tertiary care center","authors":"Varsha Pandey, Vishal Kulkarni, Vanita Bhaskar, Veenapani Mire","doi":"10.4103/mgmj.mgmj_117_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_117_23","url":null,"abstract":"Background: There has been a notable rise in cardiac-related fatalities globally, especially in the last five decades. In India, ischemic heart disease has become prevalent, affecting roughly 10% of the population. For forensic specialists, establishing the cause of death in individuals previously in good health can be complex. Autopsies are crucial in evaluating the underlying factors responsible for such deaths. This research seeks to identify and scrutinize a wide range of histopathological heart abnormalities that significantly influence the determination of the cause of death. Materials and Methods: This study was conducted in the Department of Pathology from January 2020 to December 2020. During this period, we received a total of 209 whole heart specimens. Of these, 208 specimens underwent comprehensive examinations, including macroscopic and microscopic observations. Results: Out of the 208 cases examined, 94 showed evidence of both early and advanced atherosclerosis, whereas 65 showed early and late signs of myocardial infarction. Myocardial hypertrophy was evident in 29 patients. Isolated instances of myocarditis and pericarditis were observed in one case each. Fatty streaks were identified in 32 cases; three showed red blood cells with sickle cell morphology. Heart rupture was detected in one case, and another revealed metastasis from a poorly differentiated tumor. Notably, in 90 cases, the cause of death remained undetermined despite thorough macroscopic and microscopic autopsies. Conclusion: The primary reason for cardiovascular fatalities is atherosclerosis-related myocardial infarction.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136368294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: The study evaluated the significance of acute flaccid paralysis (AFP) surveillance within the polio eradication strategy and its integral role in overall eradication efforts. Specifically, the research assessed the implementation of AFP surveillance and its management at reporting sites in Northern Nigeria. Materials and Methods: This study utilized quantitative research methods, including interviewer-administered questionnaires for health facility staff and caregivers of children in the community. The research was conducted between May and July 2019 and involved 592 participating health facilities enrolled in the AFP surveillance program for polio eradication, spread across 11 states in Nigeria's Northeast and Northwest regions. Data were analyzed using IBM SPSS Statistics for Windows, version 25.0 (Armonk, New York). Results: A total of 171 AFP cases had been reported in the preceding 6 months, with the highest proportions recorded in Kano (18.7%), Bauchi, and Kaduna (13.5% each) states. Most cases occurred in rural areas (73.1%), and most facilities reported 1–3 cases (80.8%). Of the 171 AFP cases, >90% were investigated, >80% had a complete clinical investigation, and >70% were followed up for residual paralysis examination. Most rural health facilities (>80%) had 1–3 trained staff, compared with 70.9% of urban facilities; conversely, the proportion of urban facilities with 4–6 trained staff was almost double that of rural facilities (18.4% vs. 9.8%). Surprisingly, a higher proportion of rural facility staff could define AFP correctly (94%) than urban facility staff (85.1%) (P < 0.05), and AFP surveillance and management were better in rural facilities than in urban ones. Conclusion: The AFP surveillance system in the northern region demonstrated strong performance. However, both urban and rural healthcare providers require regular training in AFP surveillance to maintain sound surveillance standards.
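A hedged sketch of how the rural-versus-urban comparison of staff who correctly defined AFP (94% vs. 85.1%, P < 0.05) could be tested with a two-proportion z-test; the abstract does not give the group denominators, so the counts below are assumed purely for illustration.

```python
# Two-proportion z-test sketch. Group sizes (n1, n2) are assumed, not taken
# from the study; only the ~94% vs ~85% proportions mirror the abstract.
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                        # two-sided P value

z, p = two_prop_ztest(x1=282, n1=300, x2=248, n2=292)    # 94.0% vs 84.9%
print(f"z = {z:.2f}, P = {p:.4f}")                       # P < 0.05
```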
{"title":"Acute flaccid paralysis surveillance and management in rural and urban reporting sites in Northern Nigeria","authors":"AliJ Onoja, FelixO Sanni, JamesD Babarinde, SheilaI Onoja, ModupeT Babarinde","doi":"10.4103/mgmj.mgmj_147_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_147_23","url":null,"abstract":"Background: The study evaluated the significance of acute flaccid paralysis (AFP) surveillance within the polio eradication strategy and its integral role in the overall eradication efforts. Specifically, the research assessed the implementation of AFP surveillance and its management at reporting sites in Northern Nigeria. Materials and Methods: This study utilized quantitative research methods, including administering interviewer-administered questionnaires to health facility staff and caregivers of children within the community. The research was conducted between May and July 2019 and involved 592 participating health facilities enrolled in the AFP surveillance program for polio eradication. These facilities were spread across 11 states in Nigeria’s Northeast and Northwest regions. Data were analyzed using IBM Statistical Package for Social Sciences (SPSS) Statistics for Windows, Version 25.0 (Armonk, New York). Results: A total of 171 AFP cases were reported 6 months prior, with the highest proportion recorded in Kano (18.7%), Bauchi, and Kaduna (13.5% each) states. Most cases were seen in rural areas (73.1%), with an average of 1–3 cases (80.8%). Of the 171 AFP cases reported in the past 6 months, >90% were investigated, >80% had a complete clinical investigation, and >70 were followed up for residual paralysis examination. Most rural health facilities (>80%) had 1–3 trained staff compared with 70.9% of facilities in urban areas. On the other hand, the proportion of facilities in urban areas with 4–6 trained staff was almost double rural area facilities (18.4% vs. 9.8%). It was a surprise that a higher proportion of pastoral facility staff was able to define AFP correctly (94%) than urban facilities with 85.1% (P < 0.05). Also, AFP surveillance and management were better in rural facilities than in urban. Conclusion: According to the research findings, the AFP surveillance system in the northern region demonstrated strong performance. However, urban and rural healthcare providers require regular training in AFP surveillance to maintain practical surveillance standards.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136368306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shashikant S Gadhe, S. Kale, A. Chalak, A. Vatkar, S. Doshi, Joydeep Dey
Introduction: Professional indemnity (PI), or medical malpractice insurance (MMI), has become a pressing topic given the rising number of medical negligence cases worldwide. However, there is a palpable difference in the understanding and uptake of this tool between developed countries and regions such as India. Aim: This study aimed to analyze resident doctors' and consultants' general understanding of MMI and their knowledge of its technical jargon. Materials and Methods: We distributed short Google Form questionnaires covering various aspects of MMI and recorded responses from 141 resident doctors and 42 consultants in the Navi Mumbai area of India. As the study was a survey, no ethical review was required. Results: As consultants' experience grew, so did their understanding of medical indemnity: approximately 90%, 64%, and 22% of consultants with more than 10 years, 5–10 years, and less than 5 years of experience, respectively, had acquired PI. The AOY:AOT (any one year:any one time) ratio was known to just 35% of these specialists. About half of the resident doctors were aware of PI and of the effect of medical specialization on PI, but only around a fifth had actually acquired it. Conclusion: The gap between the need for MMI in India and knowledge about it must be addressed by teaching medical postgraduates about it during training. "There should be special emphasis on medical indemnity in terms of its need, clauses, and cost during postgraduate medical training."
{"title":"Professional indemnity/medical malpractice insurance—Awareness among medical students and consultants of India: An online survey study","authors":"Shashikant S Gadhe, S. Kale, A. Chalak, A. Vatkar, S. Doshi, Joydeep Dey","doi":"10.4103/mgmj.mgmj_4_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_4_23","url":null,"abstract":"Introduction: Professional indemnity (PI) or medical malpractice insurance (MMI) has been a hot topic considering the increasing number of medical negligence cases rising worldwide. However, there is a palpable difference in understanding and usage of this tool in developed countries and regions such as India. Aim: This study aimed to analyze the general understanding of resident doctors and consultants about MMI and knowledge about its technical jargon. Materials and Methods: We distributed short Google Form questionnaires about various aspects of MMI. We recorded the data from 141 resident doctors and 42 consultants in the Navi Mumbai area of India. As it was a survey, we required no ethical review. Results: As consultants’ experience grew, so did their understanding of medical indemnity. Approximately 90%, 64%, and 22% of consultants with 10 years, 5–10 years, and 5 years of experience had acquired PI. The AOY:AOT (any one year:anyone time) ratio was known to just 35% of these specialists. About half of the resident doctors were aware of PI and the effects of medical specialization on PI. Around a fifth of the individuals had only acquired the PI. Conclusion: There needs to be more clarity between the need and knowledge of MMI in India. This needs to be addressed by teaching medical postgraduates about it during training. “There should be special emphasis on medical indemnity in terms of its need, clauses, and cost during postgraduate medical training.”","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"10 1","pages":"38 - 42"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43443965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Men and women are equally affected by low back pain (LBP), which can range in intensity from a dull, constant ache to a sudden, sharp sensation that limits a person's activity. Pain can begin abruptly after an accident or lifting something heavy, or it can develop over time with age-related changes in the spine. LBP is a primary healthcare problem in all developing countries, and nurses play a vital role in delivering interventions to treat back pain effectively. This study examined the effect of selected physical exercises on LBP among patients attending the outpatient department (OPD) of selected hospitals. Materials and Methods: A quasi-experimental, one-group pretest–posttest time-series design was used among patients attending the OPD of selected hospitals. A total of 160 respondents were studied from October 2021 to February 2022. A numerical pain scale and a semistructured self-administered questionnaire were used to collect data; the questionnaire was issued only to respondents who gave informed consent, to complete at their convenience. The selected physical exercises were demonstrated and then performed by patients regularly for 6 weeks, thrice a day, for 30 min per session. Data were analyzed using Statistical Package for the Social Sciences (SPSS) version 24.0, with descriptive data presented as bar graphs and frequency tables. Results: In the pretest, 59.37% of respondents had severe LBP. After performing the selected physical exercises, the proportion with severe pain fell to 56.25% at post-test 1, 32.5% at post-test 2, and 14.37% at post-test 3. The calculated t values for the mean reduction in LBP at the three post-tests were 0.78, 5.60, and 9.64, the latter two being statistically significant at the 0.05 level (P < 0.05). Conclusion: LBP is an issue for all ages and all sectors of society. A common component of pain treatment programs focuses on reconditioning through increased physical exercise, which builds strength and, as a long-term effect, concomitantly decreases pain. The investigator found the physical activities very effective and beneficial in reducing back pain among patients with LBP.
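A brief sketch of the paired comparison underlying the quoted t values: pretest versus post-test numeric pain scores analyzed with a paired-samples t-test. The scores below are synthetic, not the study's data.

```python
# Illustrative paired t-test on synthetic 0-10 numeric pain scores,
# mimicking the pretest vs post-test-3 comparison (n = 160).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.integers(5, 11, size=160).astype(float)         # mostly severe pain
post3 = np.clip(pre - rng.normal(3.0, 1.5, 160), 0, 10)   # after 6 weeks

t, p = stats.ttest_rel(pre, post3)  # paired-samples t-test
print(f"t = {t:.2f}, P = {p:.3g}")
```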
{"title":"Effect of selected physical exercises on low back pain","authors":"Archana Badhe, Marudhar Aman, D. Sonawane","doi":"10.4103/mgmj.mgmj_29_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_29_23","url":null,"abstract":"Background: Men and women are equally affected by low back pain (LBP), which can range in intensity from a dull, constant ache to a sudden, sharp sensation that hinders the person. Pain can begin abruptly due to an accident or by lifting something heavy, or it can develop over time due to age-related changes in the spine. LBP is one of the primary healthcare problems in all developing countries; nurses play a vital role in giving different interventions to treat back pain effectively. This study aims to study the effect of selected physical exercise on LBP among patients attending the outpatient department (OPD) in selected hospitals. Materials and Methods: A quasi-experimental, one-group pretest–posttest time series research design was used to conduct a study among the patients attending OPD in selected hospitals. A total of 160 respondents were studied from October 2021 to February 2022. A numerical pain scale and a semistructured self-administered questionnaire were used to collect data. Only respondents who gave informed consent were issued the questionnaire to complete at their convenience. Physical exercises were demonstrated and done by patients for 6 weeks, thrice a day in a week for 30 min regularly. Data were analyzed using Statistical Package for Social Sciences (SPSS) version 24.0. Descriptive data were presented in the form of bar graphs and frequency tables. Results: The study showed that 59.37% of the respondents had severe LBP in the pretest. After doing selected physical exercises, the severe pain level reduced to 56.25% in post-test-1, 32.5% in post-test-2, and 14.37% in post-test-3. The t value of the difference in mean reduction of LBP was tabulated, and the calculated t values were (0.78, 5.60, 9.64) statistically significant at 0.05 level of significance (P < 0.05). Conclusion: LBP is seen as an issue for all ages and all sectors of society. One common component of pain treatment programs focuses on increased physical exercise reconditioning, and exercise would increase strength and concomitantly decrease pain as a long-term effect. The investigator found that physical activities were very effective and beneficial in reducing back pain among patients with LBP.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"10 1","pages":"43 - 47"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46690675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Sleep is a modifiable risk factor for many chronic diseases, including type 2 diabetes mellitus. Poor sleep quality leads to poor management of diabetes, and poorly controlled diabetes in turn adversely affects sleep; this vicious cycle can be curtailed by good-quality sleep. Our study examined the association of glycemic control (glycosylated hemoglobin [HbA1c]) with sleep quality. Materials and Methods: A cross-sectional, observational study was conducted in the Diabetic Clinic of MGM Hospital, Kamothe, Navi Mumbai, India. Type 2 diabetes patients aged 30–60 years were assessed for sleep quality using the Pittsburgh Sleep Quality Index questionnaire, and their HbA1c was measured by high-performance liquid chromatography. Results: A total of 101 type 2 diabetes patients aged 30–60 years were assessed; 25% were good sleepers and 75% were poor sleepers. The mean ± standard deviation HbA1c was 7.14 ± 1.30 in good sleepers and 8.9 ± 2.44 in poor sleepers. The correlation between sleep quality and glycemic control gave an r value of 0.36 with a P value of 0.002, indicating a significant association between poorer sleep and worse glycemic control. Conclusion: The study shows that poor sleep quality is associated with poor glycemic control, reflected in higher HbA1c levels, in type 2 diabetic patients. Creating awareness among diabetic patients about good sleep quality and adequate sleep duration is essential for better management of diabetes.
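As a quick arithmetic check under the stated sample size, the P value for a Pearson r of 0.36 with n = 101 can be recovered from the t transform t = r * sqrt((n - 2) / (1 - r^2)); the computed value lands comfortably below 0.05, consistent with the reported significance.

```python
# Worked check: P value implied by r = 0.36 at n = 101 via the t transform.
from math import sqrt
from scipy.stats import t as t_dist

r, n = 0.36, 101
t_stat = r * sqrt((n - 2) / (1 - r ** 2))      # ~3.84
p_two_sided = 2 * t_dist.sf(t_stat, df=n - 2)  # two-sided P on df = 99
print(f"t = {t_stat:.2f}, P = {p_two_sided:.4f}")  # ~0.0002, i.e. P < 0.05
```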
{"title":"Association of sleep quality and glycemic control in type 2 diabetes mellitus","authors":"Rita Khadkikar, Sweta Bhagat, Sandeep Rai","doi":"10.4103/mgmj.mgmj_72_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_72_23","url":null,"abstract":"Background: Sleep is a modifiable risk factor for many chronic diseases, including type 2 diabetes mellitus. Poor quality of sleep leads to poor management of diabetes, adversely affecting sleep. The vicious cycle can be curtailed by good quality of sleep. Our study observed the association of glycemic control (glycosylated hemoglobin [HbA1C]) with sleep quality. Materials and Methods: A cross-sectional, observational study was conducted in the Diabetic Clinic of MGM Hospital, Kamothe, Navi Mumbai, India. Type 2 diabetes patients in the age group of 30–60 years were assessed for sleep quality using the Pittsburgh Sleep Quality Index questionnaire, and their HbA1C was measured by high-performance liquid chromatography. Results: A total of 101 type 2 diabetes patients aged 30–60 were assessed. A total of 25% were good sleepers, and 75% were poor sleepers. The mean ± standard deviation of HbA1C in good sleepers was 7.14 ± 1.30, and in poor sleepers was 8.9 ± 2.44. The correlation between sleep quality and glycemic control gave an r value of 0.36, and the P value was 0.002, which shows a highly significant correlation between poor sleep and glycemic control. Conclusion: The study shows poor sleep quality leads to poor glycemic control in type 2 diabetic patients with higher HbA1c levels. Creating awareness among diabetic patients about the good quality and duration of sleep for better management of diabetes is essential.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136366366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.4103/mgmj.mgmj_264_22
Mahmood D. Al-Mendalawi
Dear Editor, We read with interest the research paper published in Volume 9, Issue 4 (October–December 2022) of the MGM Journal of Medical Sciences, pages 517-21. In this study, Baveja et al.[1] used serology with the enzyme-linked immunosorbent assay (ELISA) technique and found a seroprevalence of Leptospira spp. of 19.78% in a cohort of Indian patients. The diagnosis of leptospirosis usually depends on serology and molecular detection. Serology often takes many days after the start of the illness before the result becomes positive, and it requires skilled handling and the maintenance of live Leptospira cells representing all serogroups.[2] Molecular diagnostic techniques, including real-time polymerase chain reaction (PCR), are faster and more sensitive in confirming the diagnosis and can also detect the infection before antibodies appear. By targeting the lipL32 gene, PCR can detect Leptospira deoxyribonucleic acid (DNA) in various clinical samples.[3] We believe that if Baveja et al.[1] had used PCR rather than ELISA in the study methodology, a more accurate estimate of the leptospirosis prevalence might have been generated. Despite that limitation, the reported substantial leptospirosis seroprevalence (19.78%),[1] which is nearly comparable to the 26.6% recently reported by Shukla et al.,[4] underlines the need to implement strict public health interventions against this harmful zoonotic infection in India. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
{"title":"Seroprevalence of acute leptospirosis in a tertiary care hospital in western India","authors":"MahmoodD Al-Mendalawi","doi":"10.4103/mgmj.mgmj_264_22","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_264_22","url":null,"abstract":"Dear Editor, We stumbled upon an engaging research paper published in Volume 9, Issue 4 of the MGM Journal of Medical Sciences, spanning pages 517-21, October to December 2022. Within this study, Bavejaet al.[1] utilized both serology and the enzyme-linked immunosorbent assay (ELISA) technique. The seroprevalence of Leptospira spp. was 19.78% in a cohort of Indian patients. The diagnosis of leptospirosis usually depends on serology and molecular detection. The serology often takes many days before the result becomes positive after the start of the illness. In addition, the serology necessitates skilled handling and maintaining live Leptospira cells representing all serogroups.[2] The molecular diagnostic techniques, including real-time polymerase chain reaction (PCR), are faster and more sensitive to firm the diagnosis and could also detect the infection before the appearance of antibodies. On targeting the lip L32 gene, PCR could detect Leptospira deoxyribonucleic acid (DNA) in various clinical samples.[3] We believe that if Baveja et al.[1] used PCR in the study methodology rather than ELISA, a more accurate estimate of the leptospirosis seroprevalence might be generated. Despite that limitation, the reported substantial leptospirosis seroprevalence (19.78%),[1] which is nearly comparable to 26.6% recently reported by Shukla et al.,[4] urges the need to implement strict public health interventions to combat that harmful zoonotic infection in India. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136367857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objectives: Determining fetal weight is crucial for effectively managing labor and delivery. It informs decisions about the most suitable delivery method and aids in identifying conditions such as low birth weight, macrosomia, and intrauterine growth restriction. In settings where ultrasound is not readily available because of limited resources, it is essential to assess how accurately fetal weight can be estimated clinically by comparing clinical estimates with ultrasound measurements and the actual birth weight. This study assessed fetal weight in full-term pregnancies using clinical and ultrasound methods, compared their accuracy, and examined their correlation with birth weight. Materials and Methods: This cross-sectional, observational, comparative study included 200 women with full-term pregnancies and was conducted from November 2019 to October 2021. Birth weight estimates were calculated clinically (using Johnson's and Dare's formulas) and by ultrasound (using Hadlock's formula), and the estimates were compared with the actual birth weights. Results: Hadlock's ultrasound formula provided the most precise fetal weight estimates, with Dare's clinical method following closely. Both clinical and ultrasound estimates showed a significant positive correlation with birth weight. Conclusion: The ultrasound method is superior to the clinical approach in accurately estimating birth weight; consequently, it is advisable to prioritize ultrasound whenever it is available and feasible.
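The abstract names the three estimators without giving their forms. Below is a sketch under common published versions, since the exact variants the authors used are not stated: Johnson's symphysio-fundal height (SFH) formula, Dare's SFH-times-abdominal-girth product, and a Hadlock three-parameter model using biparietal diameter (BPD), abdominal circumference (AC), and femur length (FL).

```python
# Hedged sketch of the three estimators named in the Methods; common
# published forms are assumed (measurements in cm, estimated weight in g).
def johnson_efw(sfh_cm: float, engaged: bool) -> float:
    """Johnson's formula: EFW = (SFH - n) * 155, with n = 11 if the head
    is engaged and 12 otherwise (one common convention)."""
    n = 11 if engaged else 12
    return (sfh_cm - n) * 155

def dare_efw(sfh_cm: float, abdominal_girth_cm: float) -> float:
    """Dare's formula: EFW = SFH (cm) * abdominal girth (cm)."""
    return sfh_cm * abdominal_girth_cm

def hadlock_efw(bpd_cm: float, ac_cm: float, fl_cm: float) -> float:
    """A Hadlock (1985) three-parameter model:
    log10(EFW) = 1.335 - 0.0034*AC*FL + 0.0316*BPD + 0.0457*AC + 0.1623*FL."""
    log_efw = (1.335 - 0.0034 * ac_cm * fl_cm + 0.0316 * bpd_cm
               + 0.0457 * ac_cm + 0.1623 * fl_cm)
    return 10 ** log_efw

print(johnson_efw(34, engaged=True))       # (34 - 11) * 155 = 3565 g
print(dare_efw(34, 92))                    # 34 * 92 = 3128 g
print(round(hadlock_efw(9.3, 33.0, 7.2)))  # ~3146 g for typical term values
```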
{"title":"Comparative analysis of clinical and sonographic estimation of fetal weight in term pregnancy at a tertiary care hospital","authors":"Sarvamangala B, Shobha Patil, Vidyashree Malipatil","doi":"10.4103/mgmj.mgmj_199_23","DOIUrl":"https://doi.org/10.4103/mgmj.mgmj_199_23","url":null,"abstract":"Objectives: Determining fetal weight is crucial in effectively managing labor and delivery. It helps make informed decisions about the most suitable delivery method and also aids in identifying conditions such as low birth weight, macrosomia, and intrauterine growth restriction in the fetus. In settings where ultrasound may not be readily available due to limited resources, it becomes essential to assess how accurately fetal weight can be estimated clinically by comparing it to ultrasound measurements and the actual birth weight. This study aimed to assess fetal weight in full-term pregnancies using clinical and ultrasound methods and compare their accuracy while examining their correlation with birth weight. Materials and Methods: This cross-sectional observational and comparative study included 200 women in their full-term pregnancies. The study was conducted from November 2019 to October 2021. In this research, we calculated birth weight estimates using clinical methods (using Johnson’s and Dare’s formulas) and ultrasound (using Hadlock’s procedure). Subsequently, these estimated weights were compared to the actual birth weight data. Results: The findings of this study revealed that Hadlock’s ultrasound formula offered the most precise fetal weight estimates, with Dare’s clinical method following closely. Clinical and ultrasound estimations notably showed a significant positive correlation with birth weight. Conclusion: The ultrasound method is superior in accurately assessing birth weight compared to the clinical approach. Consequently, it is advisable to prioritize ultrasound whenever available and feasible.","PeriodicalId":52587,"journal":{"name":"MGM Journal of Medical Sciences","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136367876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}