Language testing and assessment
Research in language testing and assessment primarily concerns the development and maintenance of local language tests and assessment tools embedded in specific contexts. The roles of local expertise (e.g., teachers) and local variables (e.g., context, policies, language background) are at the core of this research.
Our research in language testing and assessment focuses on the design and development of rating scales and instruments, as well as the analysis of rater training and behaviour in performance-based assessments of L2 speaking and communication. Language assessment literacy is at the core of our research interests.
As part of our research activities, we also examine the use of language tests and assessment as instruments of language policy and internationalisation in higher education. We investigate the uses and consequences of language tests for different stakeholders, as well as for the educational context and society at large.
Much of our research deals with the development and validation of the Test of Oral English Proficiency for Academic Staff (TOEPAS), an oral English proficiency test used to certify university lecturers for teaching through English-medium instruction (EMI). The research team has extensive experience with test design and development, including the creation of test specifications and test analysis.
Linking the TOEPAS with CEFR
Funded by ERASMUS+ (2017-2020)
The purpose of this project was to link the Test of Oral English Proficiency for Academic Staff (TOEPAS) to the Common European Framework of Reference (CEFR), with the goal of increasing the transparency of TOEPAS results for cross-institutional and transnational use. Following the methodologies and activities outlined in the manual Relating Language Examinations to the CEFR, published by the Council of Europe (2009), the linking procedure involved an international panel of 12 judges and consisted of four distinct stages: familiarisation, specification, standardisation, and validation. The procedure provided empirical evidence for establishing the minimum English language proficiency level needed for teaching in English-medium instruction programmes.
You can find more information about the Erasmus+ TAEC project at the CIP website under "Projects and collaborations".
Language assessment literacy of language teachers
Funded by the Danish National Centre for Foreign Languages (NCFF) (2020-2022)
The overarching goal of this project is to measure the language assessment literacy (LAL) of language teachers in primary and secondary education. LAL refers to the knowledge that stakeholders need in order to participate in language assessment activities. Continuous formative assessment has long been at the core of Danish education, where teachers have the responsibility to monitor students’ progress towards the common educational goals. Given language teachers’ responsibility for conducting, using, and mediating results from both classroom and national assessments, their level of LAL is crucial for maintaining reliable and valid assessment.
Analyses based on classical test theory and Rasch measurement are routinely performed as part of the development and maintenance of the TOEPAS. These include analyses of scale, rater, and test-taker behaviour, as well as alignment and standard-setting. Mixed-methods and case study approaches have been applied in the analysis of assessment consequences, stakeholder perceptions and cognition, and rater cognition.
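To illustrate the classical test theory side of such analyses, the sketch below computes Cronbach’s alpha, a standard internal-consistency estimate, for a small set of invented item ratings. The data and function names are purely hypothetical examples for illustration, not part of the actual TOEPAS analysis pipeline.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha from a score matrix.

    scores: one row of item ratings per test taker,
    one column per item. Uses sample variances.
    """
    k = len(scores[0])  # number of items
    # Sum of the item (column) variances.
    item_vars = sum(variance(col) for col in zip(*scores))
    # Variance of the test takers' total scores.
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy ratings: 5 test takers x 4 items (invented numbers)
data = [
    [2, 3, 3, 2],
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
    [5, 4, 5, 5],
]
print(round(cronbach_alpha(data), 3))  # → 0.975
```

In operational work such estimates would be computed over full rating datasets and complemented by Rasch-based analyses of item difficulty and rater severity, for which dedicated psychometric software is typically used.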
CIP’s researchers have participated in research and consultancy on test development and implementation projects. Examples include the development and analysis of foreign language tests for young learners in Danish compulsory education, the development and analysis of test items for the school-leaving foreign language exams in 9th or 10th grade, and the analysis of a placement test for students of Danish as a second language in secondary education.
We provide consultancy regarding development and implementation of local instruments for English language certification of English-medium instruction lecturers at non-Anglophone universities. Our testing team has collaborated with the University of Nantes regarding the establishment of a testing center for lecturers.
We believe that involvement of different stakeholders (management, administrators, teachers, students) in test development, implementation, and validation is crucial for the establishment of ethical and valid assessment practices. Therefore, we invite collaborations on research in all areas of language assessment.
Selected publications
Dimova, S., Yan, X., & Ginther, A. (2020). Local Language Testing: Design, Implementation, and Development. Oxon: Routledge. https://www.routledge.com/Local-Language-Testing-Design-Implementation-and-Development/Dimova-Yan-Ginther/p/book/9781138588493
Dimova, S. (2020). Language Assessment of EMI Content Teachers: What Norms. In M. Kuteeva, K. Kaufhold, & N. Hynninen (Eds.), Language Perceptions and Practices in Multilingual Universities (pp. 351-378). Cham: Palgrave Macmillan.
Dimova, S., & Kling, J. (2018). Assessing English-Medium Instruction Lecturer Language Proficiency Across Disciplines. TESOL Quarterly, 52(3), 634-656.
Dimova, S. (2017). Life after oral English certification: The consequences of the Test of Oral English Proficiency for Academic Staff for EMI lecturers. English for Specific Purposes, 46, 45-58. https://doi.org/10.1016/j.esp.2016.12.004
Dimova, S. (2017). Pronunciation Assessment in the Context of World Englishes. In O. Kang, & A. Ginther (Eds.), Assessment in Second Language Pronunciation (pp. 49-66). New York: Routledge.