Text length effects on the reliability of syntactic complexity indices
Research Methods in Applied Linguistics, 2(3), Article 100085
Pub Date: 2023-10-07 | DOI: 10.1016/j.rmal.2023.100085
Hyun-Bin Hwang, Charlene Polio
Automated tools are widely used to assess syntactic complexity in second language (L2) writing studies; however, the effects of text length on syntactic complexity indices remain unclear. This can pose a challenge when studying underrepresented populations (e.g., young learners, adults with limited literacy skills), as their lower proficiency may result in less text production. To address this issue, we investigated the minimum text length at which automated measures of syntactic complexity become reliable, taking L2 proficiency and prompt topic into account. Essays from the ICNALE corpus, a dataset of 5,200 essays spanning four proficiency levels, were used to create a dataset of texts of varying lengths (50, 100, 150, and 200 words). Mixed-effects regression models showed that seven of 14 indices were unaffected by text length regardless of learner proficiency and prompt topic. The other seven differed only between the 50- and 200-word texts within intermediate levels. We suggest a minimum of 100 words as a conservative threshold for the reliability of syntactic complexity indices. Finally, we emphasize the importance of transparent reporting practices regarding text length.
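A minimal sketch of this kind of analysis using Python's statsmodels, with a text length condition, proficiency, and their interaction as fixed effects and a random intercept per writer. The paper does not specify its software, and the column names (index_value, length_condition, proficiency, essay_id) are hypothetical:

```python
# Sketch: does a syntactic complexity index vary with truncation length?
# Hypothetical column names; the original analysis may be specified differently.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("complexity_scores.csv")  # one row per truncated essay version

# Fixed effects: length condition (50/100/150/200 words), proficiency, and
# their interaction; random intercept for each source essay/writer.
model = smf.mixedlm(
    "index_value ~ C(length_condition) * C(proficiency)",
    data=df,
    groups=df["essay_id"],
)
result = model.fit()
print(result.summary())
```

A non-significant length effect under this kind of model is what would justify treating an index as robust to text length.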
{"title":"Text length effects on the reliability of syntactic complexity indices","authors":"Hyun-Bin Hwang, Charlene Polio","doi":"10.1016/j.rmal.2023.100085","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100085","url":null,"abstract":"<div><p>Automated tools are widely used to assess syntactic complexity in second language (L2) writing studies; however, the effects of text length on syntactic complexity indices remain unclear. This can pose a challenge when studying underrepresented populations (e.g., young learners, adults with limited literacy skills), as their lower proficiency may result in less text production. To address this issue, we investigated the minimum text length threshold at which automated measures of syntactic complexity become the most reliable while considering L2 proficiency and prompt topic. Essays from the ICNALE corpus, a dataset of 5,200 essays with four proficiency levels, were used to create a dataset of texts of varying lengths (50, 100, 150, and 200 words). Mixed-effects regression models showed that seven out of 14 indices were not affected by text length regardless of learner proficiency and prompt topic. The other seven differed only between the 50- and 200-word texts within intermediate levels. We suggest a minimum of 100 words as a conservative threshold for the reliability of syntactic complexity indices. Finally, we emphasize the importance of transparent reporting practice regarding text length information.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Netnographic research ethics in applied linguistics: A systematic review of data collection and reporting practices
Research Methods in Applied Linguistics, 2(3), Article 100082
Pub Date: 2023-10-06 | DOI: 10.1016/j.rmal.2023.100082
Matt Kessler, Francesca Marino, Dacota Liska
Ethnography – a methodological staple of applied linguistics research since the field's inception – has well-established ethical guidelines. Although ethnographic research has traditionally been subject to institutional ethics review board protocols, the expansion of ethnography into online spaces, recharacterized as netnography, has presented novel ethical challenges (e.g., determining what constitutes ‘public’ versus ‘private’ data, and protecting participants’ identities). To better understand how researchers have handled such challenges, this study systematically reviews the data collection and reporting practices of peer-reviewed netnographic research in applied linguistics. High-impact journals were searched using specific criteria, yielding 60 studies published in 14 journals between 2000 and 2022. These studies were coded to examine how common issues were handled, such as gaining informed consent, obtaining permissions (from companies and their representatives to use data), and protecting participants’ identities. Data analyses revealed that, while such ethical issues are a consideration for many researchers, there is still ample room for improvement in ethical decision-making. Based on our review, we conclude with suggestions for those who intend to conduct netnographic research in the future.
{"title":"Netnographic research ethics in applied linguistics: A systematic review of data collection and reporting practices","authors":"Matt Kessler, Francesca Marino, Dacota Liska","doi":"10.1016/j.rmal.2023.100082","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100082","url":null,"abstract":"<div><p>Ethnography – a methodological staple of applied linguistics research since the field's inception – has well-established ethical guidelines. Although ethnographic research has traditionally been subject to institutional ethics review board protocols, the expansion of ethnography into online spaces, which has been recharacterized as <em>netnography,</em> has presented novel ethical challenges (e.g., determining what constitutes ‘public’ versus ‘private’ data, protecting participants’ identities, and more). To better understand how researchers have handled such ethical challenges, this study systematically reviews the data collection and reporting practices of peer-reviewed netnographic research in applied linguistics. High-impact journals were searched using specific criteria, resulting in 60 studies published in 14 journals during the span of 2000–2022. These studies were coded to examine how common issues were handled, such as: gaining informed consent, obtaining permissions (from companies and their representatives to use data), and protecting participants’ identities. Data analyses revealed that, while such ethical issues are a consideration for many researchers, there is still ample room for improvement when it comes to ethical decision-making. Based on our review, in the discussion, we provide suggestions for those who intend to conduct netnographic research in the future.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The validation of two L2 self-efficacy instruments using Rasch analysis
Research Methods in Applied Linguistics, 2(3), Article 100084
Pub Date: 2023-10-05 | DOI: 10.1016/j.rmal.2023.100084
Jonathan Phipps
This study seeks to validate two instruments designed to measure second and foreign language speaking self-efficacy. One hundred ninety first- and second-year Japanese university students participated in this study. Analyses based on the Rasch model focused on key aspects of validity drawing on the work of Wolf and Smith (2007) as well as Messick's validation framework. Findings indicated that (1) the instruments displayed high item and person reliability and separation; (2) there was a sufficient spread of items across the variable response levels, though some items were redundant and the final step of the Likert scale for one instrument was beyond the level of endorsement of the current participants; (3) the vast majority of items showed an excellent fit with the Rasch model; (4) the rating scales met all of Linacre's criteria for optimal functioning; (5) the items displayed an acceptable degree of unidimensionality; and (6) the items showed a high degree of measurement invariance. Teachers and researchers may take advantage of the two validated instruments to gauge L2 learners’ self-efficacy.
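To make the reported fit statistics concrete, here is a from-scratch sketch of dichotomous Rasch expectations and infit/outfit mean squares. The abilities, difficulties, and responses below are simulated for illustration only; the study itself would have used dedicated Rasch software:

```python
# Sketch of Rasch model expectations and item fit (infit/outfit MNSQ),
# given person abilities theta and item difficulties b. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(0, 1, size=190)   # person abilities (190 participants)
b = np.linspace(-2, 2, 20)           # item difficulties

# Rasch model: P(endorse) = logistic(theta - b)
P = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
X = rng.binomial(1, P)               # simulated 0/1 responses
V = P * (1 - P)                      # model variance of each response

# Infit: information-weighted mean square; outfit: unweighted mean square
infit = ((X - P) ** 2).sum(axis=0) / V.sum(axis=0)
outfit = ((X - P) ** 2 / V).mean(axis=0)
print("Item infit MNSQ: ", np.round(infit, 2))
print("Item outfit MNSQ:", np.round(outfit, 2))
```

Mean squares near 1.0 indicate good fit; Linacre's commonly cited guideline treats roughly 0.5 to 1.5 as productive for measurement.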
{"title":"The validation of two L2 self-efficacy instruments using rasch analysis","authors":"Jonathan Phipps","doi":"10.1016/j.rmal.2023.100084","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100084","url":null,"abstract":"<div><p>This study seeks to validate two instruments designed to measure second and foreign language speaking self-efficacy. One-hundred and ninety first- and second-year Japanese university students participated in this study. Analyses based on the Rasch model focused on key aspects of validity based the work of Wolf and Smith (2007) as well as Messick's validation framework. Findings indicated that (1) the instruments displayed high item and person reliability and separation, (2) there was a sufficient spread of items considering the variable response levels, though some redundancies occurred in the items and the final step in the Likert scale for one instrument was beyond the level of endorsement for the participants in the current study, (3) the vast majority of items showed an excellent fit with the Rasch model, (4) the rating scales matched all criteria for optimal functioning proposed by Linacre, (5) the items displayed an acceptable degree of unidimensionality, and (6) the items showed a high degree of measurement invariance. Teachers and researchers may take advantage of the two validated instruments to gage L2 learners’ self-efficacy.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing the map task in applied linguistics research: What, how, and why
Research Methods in Applied Linguistics, 2(3), Article 100081
Pub Date: 2023-09-23 | DOI: 10.1016/j.rmal.2023.100081
Juan Berríos, Angela Swain, Melinda Fricke
The “map task” is an interactive, goal-driven, real-time conversational task used to elicit semi-controlled natural language production data. We present recommendations for creating a bespoke map task that can be tailored to individual research projects and administered online using a chat interface. As proof of concept, we present a case study exemplifying our own implementation, designed to elicit informal written communication in either English or Spanish. Eight experimental maps were created, manipulating linguistic factors including lexical frequency, cognate status, and semantic ambiguity. Participants (N = 40) completed the task in pairs and took turns (i) providing directions based on a pre-traced route, or (ii) following directions to draw the route on an empty map. Computational measures of image similarity (e.g., the structural similarity index) between pre-traced and participant-traced routes showed that participants completed the task successfully; we describe the use of this method for measuring task success quantitatively. We also provide a comparative analysis of the language elicited in English and Spanish. The most frequently used words were roughly equivalent in both languages, encompassing primarily commands and items on the maps. Similarly, abbreviations, swear words, and slang present in both datasets indicated that the task successfully elicited informal communication. Interestingly, Spanish turns were longer and displayed a wider range of morphologically complex forms. English, conversely, displayed strategies mostly absent in Spanish, such as the use of cardinal directions as a communicative strategy. We consider the online map task a promising method for examining a variety of phenomena in applied linguistics research.
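The route-comparison step the authors describe can be approximated with scikit-image's implementation of the structural similarity index. A minimal sketch; the file names are hypothetical, and the images are assumed to be same-sized RGB exports of the two maps:

```python
# Sketch: score task success by comparing the pre-traced route image with
# the participant-traced one via the structural similarity index (SSIM).
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

reference = rgb2gray(imread("route_pretrace.png"))     # values in [0, 1]
drawn = rgb2gray(imread("route_participant.png"))

score = structural_similarity(reference, drawn, data_range=1.0)
print(f"SSIM = {score:.3f}")  # 1.0 = identical images; higher = closer routes
```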
{"title":"Implementing the map task in applied linguistics research: What, how, and why","authors":"Juan Berríos , Angela Swain , Melinda Fricke","doi":"10.1016/j.rmal.2023.100081","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100081","url":null,"abstract":"<div><p>The “map task” is an interactive, goal-driven, real-time conversational task used to elicit semi-controlled natural language production data. We present recommendations for creating a bespoke map task that can be tailored to individual research projects and administered online using a chat interface. As proof of concept, we present a case study exemplifying our own implementation, designed to elicit informal written communication in either English or Spanish. Eight experimental maps were created, manipulating linguistic factors including lexical frequency, cognate status, and semantic ambiguity. Participants (<em>N</em> = 40) completed the task in pairs and took turns (i) providing directions based on a pre-traced route, or (ii) following directions to draw the route on an empty map. Computational measures of image similarity (e.g., structural similarity index) between pre-traced and participant-traced routes showed that participants completed the task successfully; we describe use of this method for measuring task success quantitatively. We also provide a comparative analysis of the language elicited in English and Spanish. The most frequently used words were roughly equivalent in both languages, encompassing primarily commands and items on the maps. Similarly, abbreviations, swear words, and slang present in both datasets indicated that the task successfully elicited informal communication. Interestingly, Spanish turns were longer and displayed a wider range of morphologically complex forms. English, conversely, displayed strategies mostly absent in Spanish, such as the use of cardinal directions as a communicative strategy. We consider the online map task as a promising method for examining a variety of phenomena in applied linguistics research.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2023-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of contextualized and non-contextualized meaning-recall vocabulary test formats
Research Methods in Applied Linguistics, 2(3), Article 100075
Pub Date: 2023-09-21 | DOI: 10.1016/j.rmal.2023.100075
Tim Stoeckel, Tomoko Ishii, Young Ae Kim, Hung Tan Ha, Nam Thi Phuong Ho, Stuart McLean
Meaning-recognition and meaning-recall are two commonly used test modalities for assessing second language vocabulary knowledge for the purpose of reading. Although considerable variation in item format exists within each modality, previous research has examined this variation almost exclusively among meaning-recognition item types. This article reports on two exploratory studies, each comparing a fully-contextualized and a non-contextualized meaning-recall variant for one specific testing purpose: coverage-comprehension research. The fully-contextualized test utilized the same 622-word passage in each study. In the non-contextualized tests, target words appeared in short, non-defining sentences; in Study A, the elicited response was a translation of only the target item, while in Study B it was a translation of the entire prompt sentence. Scores on the compared tests differed significantly only in Study A. In both studies, the consistency with which the compared item formats yielded the same outcome (correct or incorrect) when the same target word was encountered by the same learner was rather low. The provision of relatively authentic context sometimes seemed to aid lexical inferencing, but at other times it increased task difficulty relative to the limited-context formats. These findings suggest that different meaning-recall formats could lead to different conclusions regarding knowledge of specific words, and this could impact coverage-comprehension research findings.
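The cross-format consistency question could be quantified, for example, as raw agreement plus Cohen's kappa over paired item scores. The abstract does not state which statistics the authors computed, and the column names below are hypothetical:

```python
# Sketch: per-word consistency between two meaning-recall formats.
# Each row = one learner x target word, scored 0/1 under both formats.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("recall_scores.csv")
contextualized = df["contextualized_correct"]
noncontext = df["noncontextualized_correct"]

agreement = (contextualized == noncontext).mean()   # same outcome, either way
kappa = cohen_kappa_score(contextualized, noncontext)  # chance-corrected
print(f"Raw agreement: {agreement:.2%}, Cohen's kappa: {kappa:.2f}")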
{"title":"A comparison of contextualized and non-contextualized meaning-recall vocabulary test formats","authors":"Tim Stoeckel , Tomoko Ishii , Young Ae Kim , Hung Tan Ha , Nam Thi Phuong Ho , Stuart McLean","doi":"10.1016/j.rmal.2023.100075","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100075","url":null,"abstract":"<div><p>Meaning-recognition and meaning-recall are two commonly-used test modalities to assess second language vocabulary knowledge for the purpose of reading. Although considerable variation in item format exists within each modality, previous research has examined this variation almost exclusively among meaning-recognition item types. This article reports on two exploratory studies, each comparing a fully-contextualized and a non-contextualized meaning-recall variant for one specific testing purpose: coverage-comprehension research. The fully-contextualized test utilized the same 622-word passage in each study. In the non-contextualized tests, target words appeared in short, non-defining sentences; in Study A, the elicited response was a translation of only the target item while in Study B, it was the entire prompt sentence. Scores on the compared tests differed significantly only in Study A. In both studies, the consistency with which the compared item formats yielded the same outcome (correct or incorrect) when the same target word was encountered by the same learner was rather low. The provision of relatively authentic context sometimes seemed to aid lexical inferencing, but other times it increased task difficulty relative to the limited-context formats. These findings suggest that different meaning-recall formats could lead to different conclusions regarding knowledge of specific words, and this could impact coverage-comprehension research findings.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100075"},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49737429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GAM-based individual difference measures for L2 ERP studies
Research Methods in Applied Linguistics, 2(3), Article 100079
Pub Date: 2023-09-15 | DOI: 10.1016/j.rmal.2023.100079
Nienke Meulman, Simone A. Sprenger, Monika S. Schmid, Martijn Wieling
ERPs (Event-Related Potentials) have become a widely used measure for studying second language (L2) processing. To study individual differences, a component outcome measure is traditionally calculated by averaging the amplitude of a participant's brain response in a pre-specified time window of the ERP waveform in different conditions (e.g., the ‘Response Magnitude Index’; Tanner, McLaughlin, Herschensohn & Osterhout, 2013). This approach suffers from the problem that the definition of such time windows is rather arbitrary, and that the result is sensitive to outliers as well as to participant variation in latency. The latter is particularly problematic for studies of L2 processing. Furthermore, the size of an L2 speaker's ERP response (i.e., the amplitude difference) may not be the best indicator of near-native proficiency, as native speakers also show a great deal of variability in this respect; the ‘robustness’ of an L2 speaker's ERP response (i.e., how consistently they show an amplitude difference) is potentially a more useful indicator. In this paper we introduce a novel method for extracting a set of individual difference measures from ERP waveforms. Our method is based on participants’ complete waveforms for a given time series, modelled using generalized additive modelling (GAM; Wood, 2017). From the modelled waveform, we extract a set of measures based on amplitude, area, and peak effects. We illustrate the benefits of our method compared to the traditional Response Magnitude Index with data on the processing of grammatical gender violations in 66 Slavic L2 speakers of German and 29 German native speakers. One measure in particular, the ‘Normalized Modelled Peak’, appears to outperform the others in characterizing differences between native speakers and L2 speakers, and it captures proficiency differences among L2 speakers. This measure reflects the height of the (modelled) peak, normalized against the uncertainty of the modelled signal, here in the P600 search window. It may be seen as a measure of peak robustness, that is, how reliably the individual shows a P600 effect, largely independent of where in the P600 window it occurs. We discuss the implications of our results and offer suggestions for future studies of L2 processing. The code to implement these analyses is available to other researchers.
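A simplified, single-participant illustration of the normalized-peak idea using pygam. The authors' actual pipeline and normalization follow their paper and published code (likely in R); the difference wave below is a random placeholder so the snippet runs:

```python
# Conceptual sketch: fit a GAM to a violation-minus-control difference wave
# in the P600 search window, then normalize the modelled peak against the
# uncertainty of the modelled signal at that peak.
import numpy as np
from pygam import LinearGAM, s

time = np.arange(500, 900, 2).reshape(-1, 1)   # P600 window, ms
diff_wave = np.random.default_rng(1).normal(2, 1, time.size)  # placeholder

gam = LinearGAM(s(0)).fit(time, diff_wave)
fitted = gam.predict(time)
ci = gam.confidence_intervals(time, width=0.95)  # (n, 2): lower, upper

peak_idx = np.argmax(fitted)
half_width = (ci[peak_idx, 1] - ci[peak_idx, 0]) / 2
normalized_peak = fitted[peak_idx] / half_width  # peak height vs. uncertainty
print(f"Normalized modelled peak: {normalized_peak:.2f}")
```

The normalization is the point: a tall but noisy peak and a modest but consistent one can receive similar scores, which is what makes the measure a robustness index rather than a raw amplitude.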
{"title":"GAM-based individual difference measures for L2 ERP studies","authors":"Nienke Meulman , Simone A. Sprenger , Monika S. Schmid , Martijn Wieling","doi":"10.1016/j.rmal.2023.100079","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100079","url":null,"abstract":"<div><p>ERPs (Event-Related Potentials) have become a widely-used measure to study second language (L2) processing. To study individual differences, traditionally a component outcome measure is calculated by averaging the amplitude of a participant's brain response in a pre-specified time window of the ERP waveform in different conditions (e.g., the ‘Response Magnitude Index’; Tanner, Mclaughlin, Herschensohn & Osterhout, 2013). This approach suffers from the problem that the definition of such time windows is rather arbitrary, and that the result is sensitive to outliers as well as participant variation in latency. The latter is particularly problematic for studies on L2 processing. Furthermore, the size of the ERP response (i.e., amplitude difference) of an L2 speaker may not be the best indicator of near-native proficiency, as native speakers also show a great deal of variability in this respect, with the ‘robustness’ of an L2 speaker's ERP response (i.e., how consistently they show an amplitude difference) potentially being a more useful indicator. In this paper we introduce a novel method for the extraction of a set of individual difference measures from ERP waveforms. Our method is based on participants’ complete waveforms for a given time series, modelled using generalized additive modelling (GAM; Wood, 2017). From our modelled waveform, we extract a set of measures which are based on amplitude, area and peak effects. We illustrate the benefits of our method compared to the traditional Response Magnitude Index with data on the processing of grammatical gender violations in 66 Slavic L2 speakers of German and 29 German native speakers. One of our measures in particular appears to outperform the others in characterizing differences between native speakers and L2 speakers, and captures proficiency differences between L2 speakers: the ‘Normalized Modelled Peak’. This measure reflects the height of the (modelled) peak, normalized against the uncertainty of the modelled signal, here in the P600 search window. This measure may be seen as a measure of peak robustness, that is, how reliable the individual is able to show a P600 effect, largely independently of where in the P600 window this occurs. We discuss implications of our results and offer suggestions for future studies on L2 processing. The code to implement these analyses is available for other researchers.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2023-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explanatory item response models for instrument validation: A tutorial based on an elicited imitation test
Research Methods in Applied Linguistics, 2(3), Article 100080
Pub Date: 2023-09-14 | DOI: 10.1016/j.rmal.2023.100080
Daniel R. Isbell, Young-A Son
This tutorial introduces explanatory item response models for the purpose of validating instruments used in applied linguistics research. These models can be used to investigate how theoretically important, construct-relevant characteristics of items influence the difficulty associated with successful responses, which constitutes valuable validity evidence. We focus on two item-explanatory models, the linear logistic test model for dichotomous responses and the linear rating scale model for ordinal responses, in the context of elicited imitation tests commonly used to measure oral proficiency in L2 research. Examples are provided using open data and the open-source statistical software R.
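The tutorial's own examples are in R. As a rough cross-language sketch, the linear logistic test model (LLTM) structure, person ability plus a linear decomposition of item difficulty into item properties, can be expressed as a logistic regression with person fixed effects. The column names below are hypothetical:

```python
# Sketch of the LLTM idea for dichotomous elicited-imitation responses:
# logit P(correct) = person ability - sum of weighted item properties.
# Long-format data, one row per person x item; hypothetical columns
# n_words and embedding stand in for construct-relevant item features.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ei_responses.csv")  # columns: person, correct, n_words, embedding

lltm = smf.logit("correct ~ C(person) + n_words + embedding", data=df).fit()
# Feature coefficients estimate how each item property shifts the logit;
# negate them to read the values as contributions to item difficulty.
print(lltm.params[["n_words", "embedding"]])
```

If the feature weights are substantial and in the theoretically expected direction, that pattern is the validity evidence the tutorial describes: difficulty is driven by construct-relevant item characteristics.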
{"title":"Explanatory item response models for instrument validation: A tutorial based on an elicited imitation test","authors":"Daniel R. Isbell , Young-A Son","doi":"10.1016/j.rmal.2023.100080","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100080","url":null,"abstract":"<div><p>This tutorial introduces explanatory item response models for the purpose of validating instruments used in applied linguistics research. These models can be used to investigate how theoretically important, construct-relevant characteristics of items influence the difficulty associated with successful responses, which constitutes valuable validity evidence. We focus on two item-explanatory models, the linear logistic test model for dichotomous responses and the linear rating scale model for ordinal responses, in the context of elicited imitation tests commonly used to measure oral proficiency in L2 research. Examples are provided using open data and the open-source statistical software R.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realizing new potential in vocabulary studies: Co-registration of eye movements and brain potentials
Research Methods in Applied Linguistics, 2(3), Article 100077
Pub Date: 2023-09-07 | DOI: 10.1016/j.rmal.2023.100077
Manuel F. Pulido, Kathy Conklin
Progress in vocabulary learning is often not immediately apparent in behavior, posing limitations to our understanding of ongoing development. One potentially fruitful approach lies in the use of EEG to examine brain potentials, which may reveal changes in vocabulary acquisition that precede behavioral outcomes. Until recently, however, the word-by-word presentation of EEG paradigms was not compatible with natural reading experiments. The simultaneous recording (or “co-registration”) of eye movements and EEG can advance our current understanding of how vocabulary processing and learning unfold in real time during reading, e.g., indicating when sensitivity to the form and meaning components of vocabulary emerges. Despite advances of co-registration in allied fields, its potential in vocabulary studies is still unrealized. This may be due to uncertainty about how to design studies that appropriately match the method to research questions. We provide some methodological guidance and suggestions to help kick-start co-registration in vocabulary research.
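At its analytical core, co-registration time-locks the EEG to fixation onsets from the eye tracker to produce fixation-related potentials. A minimal sketch, assuming the two data streams already share a clock; the file names are hypothetical:

```python
# Sketch: extract fixation-locked EEG epochs (fixation-related potentials).
import numpy as np

fs = 500                                         # EEG sampling rate (Hz)
eeg = np.load("eeg_channel.npy")                 # 1-D voltage trace, one channel
fixation_onsets_s = np.load("fix_onsets.npy")    # fixation start times (s)

pre, post = 0.2, 0.8                             # window around each fixation (s)
epochs = []
for t in fixation_onsets_s:
    start, stop = int((t - pre) * fs), int((t + post) * fs)
    if start >= 0 and stop <= eeg.size:
        seg = eeg[start:stop]
        seg = seg - seg[: int(pre * fs)].mean()  # baseline: pre-fixation interval
        epochs.append(seg)

frp = np.mean(epochs, axis=0)                    # average fixation-related potential
```

Real pipelines must also correct for overlapping responses from successive fixations and for oculomotor artifacts, which is where the design guidance in this article comes in.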
{"title":"Realizing new potential in vocabulary studies: Co-registration of eye movements and brain potentials","authors":"Manuel F. Pulido , Kathy Conklin","doi":"10.1016/j.rmal.2023.100077","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100077","url":null,"abstract":"<div><p>Progress in vocabulary learning is often not immediately apparent in behavior, posing limitations to our understanding of ongoing development. One potentially fruitful approach lies in the use of EEG to examine brain potentials, which may reveal changes in vocabulary acquisition that precede behavioral outcomes. Until recently, however, the word-by-word presentation of EEG paradigms was not compatible with natural reading experiments. The simultaneous recording (or “co-registration”) of eye movements and EEG can advance our current understanding of how vocabulary processing and learning unfolds in real time during reading, e.g., indicating when a sensitivity to the form and meaning components of vocabulary emerges. Despite advances of co-registration in allied fields, its potential in vocabulary studies is still unrealized. This may be due to uncertainty on how to design studies that appropriately match the method with research questions. We provide some methodological guidance and provide suggestions to help kick-start co-registration in vocabulary research.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100077"},"PeriodicalIF":0.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vignette methodology in applied linguistics
Research Methods in Applied Linguistics, 2(3), Article 100078
Pub Date: 2023-09-01 | DOI: 10.1016/j.rmal.2023.100078
Julia Goetze
Vignette methodology offers promising opportunities for the investigation of situation-specific dynamic (affective) variables, such as emotions and cognitions, in language learning, teaching, and use. Despite their frequent application in adjacent fields like education, sociology, and psychology, the use of vignettes and vignette methodology in applied linguistics is still scarce. Using emotion research as an example, this tutorial introduces applied linguists to vignette methodology, outlines a vignette design framework, illustrates administration strategies, and highlights its ability to bolster methodological consistency and research-practitioner collaborations across various research paradigms. The tutorial aims to advance the use of vignette methodology in applied linguistics by illustrating its potential to strengthen the validity of research findings, as well as its benefits regarding the efficiency of data collection, participant recruitment, and research costs.
{"title":"Vignette methodology in applied linguistics","authors":"Julia Goetze","doi":"10.1016/j.rmal.2023.100078","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100078","url":null,"abstract":"<div><p>Vignette methodology offers promising opportunities for the investigation of situation-specific dynamic (affective) variables, such as emotions and cognitions, in language learning, teaching, and use. Despite their frequent application in adjacent fields like education, sociology, and psychology, the use of vignettes and vignette methodology in applied linguistics is still scarce. Using emotion research as an example, this tutorial introduces applied linguists to vignette methodology, outlines a vignette design framework, illustrates administration strategies, and highlights its ability to bolster methodological consistency and research-practitioner collaborations across various research paradigms. The tutorial aims to advance the use of vignette methodology in applied linguistics by illustrating its potential to strengthen the validity of research findings, as well as its benefits regarding the efficiency of data collection, participant recruitment, and research costs.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100078"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49750768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Word processing before explicit attention: Using the gaze-contingent boundary paradigm in L2 reading research
Research Methods in Applied Linguistics, 2(3), Article 100074
Pub Date: 2023-08-29 | DOI: 10.1016/j.rmal.2023.100074
Irina Elgort, Aaron Veldre
Eye-movement studies investigating second language (L2) word processing during reading are growing rapidly. However, what information L2 readers are able to process parafoveally is a less researched topic. The gaze-contingent boundary paradigm (Rayner, 1975) allows researchers to manipulate the visual information in an upcoming word during reading, tapping into real-time word processing without readers' awareness. This article provides an overview of experimental studies of parafoveal word processing in reading, followed by a methodological review of the use of the boundary paradigm in L2 and bilingual research. We synthesize key methodological details (including preview types and eye-movement measures) and findings of the 15 experiments that met our search criteria, concluding that the parafoveal preview effect observed when reading in the first language is also present in L2 reading. We propose how the gaze-contingent boundary paradigm can be used to study L2 lexical knowledge and the factors that affect its development. Finally, we provide advice and instructions for designing and conducting boundary paradigm experiments.
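The paradigm's logic can be sketched as a display loop that swaps a preview string for the target word once gaze crosses an invisible boundary. Here get_gaze_x, set_display, and trial_finished are hypothetical stand-ins for eye-tracker and display calls (e.g., PsychoPy plus a tracker SDK):

```python
# Sketch of gaze-contingent boundary logic (hypothetical I/O functions).
BOUNDARY_X = 612  # pixel position of the invisible boundary before the target

def run_trial(sentence_with_preview, sentence_with_target):
    set_display(sentence_with_preview)   # preview visible parafoveally
    display_changed = False
    while not trial_finished():
        if not display_changed and get_gaze_x() > BOUNDARY_X:
            # The swap must complete during the saccade (~20-50 ms),
            # before the eyes land on the target word, or readers may
            # notice the change and the trial must be discarded.
            set_display(sentence_with_target)
            display_changed = True
```

The critical engineering constraint is the tracker-to-display latency; the article's design advice largely concerns verifying that changes complete within the saccade.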
{"title":"Word processing before explicit attention: Using the gaze-contingent boundary paradigm in L2 reading research","authors":"Irina Elgort , Aaron Veldre","doi":"10.1016/j.rmal.2023.100074","DOIUrl":"https://doi.org/10.1016/j.rmal.2023.100074","url":null,"abstract":"<div><p>Eye-movement studies investigating second language (L2) word processing during reading are growing exponentially. However, what information L2 readers are able to process parafoveally is a less researched topic. The <em>gaze-contingent boundary paradigm</em> (Rayner, 1975) allows researchers to manipulate visual information in an upcoming word during reading, tapping into real-time word processing without awareness. This article provides an overview of experimental studies of parafoveal word processing in reading, followed by a methodological review of the use of the boundary paradigm in L2 and bilingual research. We synthesize key methodological details (including preview type, eye-movement measures) and findings of 15 experiments that met our search criteria, concluding that the parafoveal preview effect observed when reading in the first language is also present in L2 reading. We propose how the gaze-contingent boundary paradigm can be used to study L2 lexical knowledge and factors that affect its development. Finally, we provide advice and instructions for designing and conducting boundary paradigm experiments.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"2 3","pages":"Article 100074"},"PeriodicalIF":0.0,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49737744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}