Development of a quality-assessment tool for experimental bruxism studies: reliability and validity.

Andreas Dawson, Karen G Raphael, Alan Glaros, Susanna Axelsson, Taro Arima, Malin Ernberg, Mauro Farella, Frank Lobbezoo, Daniele Manfredini, Ambra Michelotti, Peter Svensson, Thomas List
{"title":"Development of a quality-assessment tool for experimental bruxism studies: reliability and validity.","authors":"Andreas Dawson,&nbsp;Karen G Raphael,&nbsp;Alan Glaros,&nbsp;Susanna Axelsson,&nbsp;Taro Arima,&nbsp;Malin Ernberg,&nbsp;Mauro Farella,&nbsp;Frank Lobbezoo,&nbsp;Daniele Manfredini,&nbsp;Ambra Michelotti,&nbsp;Peter Svensson,&nbsp;Thomas List","doi":"10.11607/jop.1065","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews.</p><p><strong>Methods: </strong>Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity assessment, (4) reliability and discriminitive validity assessment, and (5) instrument refinement. The kappa value and phi-coefficient were calculated to assess inter-observer reliability and discriminative ability, respectively.</p><p><strong>Results: </strong>Following preliminary decisions and a literature review, a list of 52 items to be considered for inclusion in the tool was compiled. Eleven experts were invited to join a Delphi panel and 10 accepted. Four Delphi rounds reduced the preliminary tool-Quality-Assessment Tool for Experimental Bruxism Studies (Qu-ATEBS)- to 8 items: study aim, study sample, control condition or group, study design, experimental bruxism task, statistics, interpretation of results, and conflict of interest statement. Consensus among the Delphi panelists yielded good face validity. Inter-observer reliability was acceptable (k = 0.77). Discriminative validity was excellent (phi coefficient 1.0; P < .01). During refinement, 1 item (no. 8) was removed.</p><p><strong>Conclusion: </strong>Qu-ATEBS, the seven-item evidence-based quality assessment tool developed here for use in systematic reviews of experimental bruxism studies, exhibits face validity, excellent discriminative validity, and acceptable inter-observer reliability. Development of quality assessment tools for many other topics in the orofacial pain literature is needed and may follow the described procedure.</p>","PeriodicalId":16649,"journal":{"name":"Journal of orofacial pain","volume":"27 2","pages":"111-22"},"PeriodicalIF":0.0000,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.11607/jop.1065","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of orofacial pain","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11607/jop.1065","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Aims: To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews.

Methods: Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity assessment, (4) reliability and discriminative validity assessment, and (5) instrument refinement. The kappa value and phi coefficient were calculated to assess inter-observer reliability and discriminative ability, respectively.
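As a rough illustration of the reliability statistic named above, the sketch below computes Cohen's kappa for two observers who rate the same set of items. The rating vectors are hypothetical (the abstract does not report the raw data); only the formula reflects the statistic used in the paper.

```python
# Minimal sketch of the inter-observer reliability statistic (Cohen's kappa),
# assuming two observers independently rated the same studies on each tool item.
# The rating vectors below are hypothetical; only the formula mirrors the abstract.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ratings of eight studies (1 = criterion met, 0 = not met)
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.71 for these example data
```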

Results: Following preliminary decisions and a literature review, a list of 52 items to be considered for inclusion in the tool was compiled. Eleven experts were invited to join a Delphi panel and 10 accepted. Four Delphi rounds reduced the preliminary tool, the Quality-Assessment Tool for Experimental Bruxism Studies (Qu-ATEBS), to 8 items: study aim, study sample, control condition or group, study design, experimental bruxism task, statistics, interpretation of results, and conflict of interest statement. Consensus among the Delphi panelists yielded good face validity. Inter-observer reliability was acceptable (κ = 0.77). Discriminative validity was excellent (phi coefficient = 1.0; P < .01). During refinement, 1 item (no. 8) was removed.
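To make the discriminative-validity figure concrete: the phi coefficient can be computed from a 2×2 table, assumed here to cross known study quality (high vs low) against the tool's classification. The cell counts below are hypothetical; perfect separation yields phi = 1.0, matching the value reported above.

```python
# Minimal sketch of the phi coefficient for a 2x2 table [[a, b], [c, d]],
# assumed here to cross known study quality against the tool's classification.
# The counts are hypothetical; perfect separation yields phi = 1.0.
import math

def phi_coefficient(a, b, c, d):
    """Phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: 5 high-quality studies all rated high, 5 low-quality all rated low
print(phi_coefficient(5, 0, 0, 5))  # -> 1.0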

Conclusion: Qu-ATEBS, the seven-item, evidence-based quality-assessment tool developed here for use in systematic reviews of experimental bruxism studies, exhibits face validity, excellent discriminative validity, and acceptable inter-observer reliability. Quality-assessment tools are needed for many other topics in the orofacial pain literature, and their development may follow the procedure described here.
