Professor Lourdes Ortega (Georgetown University)
A Transdisciplinary Agenda for Language Testing? Nudges from a Multilingual SLA Perspective
Fifteen scholars from the field of second language acquisition (SLA) united around a collective position paper published last year (Douglas Fir Group, 2016). We acknowledged that compelling changes have taken place in the nature of language learning and teaching, fueled by sweeping world patterns in technology, globalization, and mobility. We sought to articulate a framework for how the field of SLA might respond to such profound changes, explicitly reframing the study of language learning and teaching as inseparable from people’s emergent multilingual lives. We concluded that SLA must make commensurate changes in its disciplinary research agendas.
In this talk, I will nudge the ALTE audience to reflect on the corollaries of this proposal for the field of language testing. First I will examine the transportability from SLA to language testing of two key features of the vision: transdisciplinarity and multilingualism. Transdisciplinarity calls for risk-tolerant traveling in and out of the boundaries of fields in order to generate socially useful knowledge that is more than just the sum of disciplines. How can the field of language testing engage in transdisciplinary exchanges, given that it is often seen – at least by outsiders – as expertise driven and highly technical? Multilingualism requires a reimagining of both “language” and “competence.” What roles can language testers play in inspiring new imagined yet measurable contents for these two constructs, which have always been central to the daily activities of the language testing field? I will then take up possible contributions of the new transdisciplinary ethos for language testing across the three layers proposed by the Douglas Fir Group: the micro level of social activity, the meso level of sociocultural institutions and communities, and the macro level of emotionally imbued ideological structures. What might it take for the various communities of language testers to embrace the challenge to measure – and to debunk mismeasuring – complex multilingual abilities for use that are impacted simultaneously at all three levels? What might language testing look like in the future, if it is informed by a semiotic understanding of communicative abilities and an emotional-cognitive-social understanding of multiple-language learning?
I hope to show that the field of language testing is uniquely positioned and equipped to help illuminate the emergent competencies of multilingual people across their languages. The ability of the field to make a positive impact in today’s world depends crucially on the development of a transdisciplinary agenda that carries into the long-term future. Language testing and SLA can be effective allies in this enterprise.
Professor Jan Hulstijn (University of Amsterdam)
Construct and measurement of language proficiency: perspectives from BLC Theory and corpus linguistics
In the first part of the presentation, I will take the audience on a theoretical, philosophical flight, high above the daily worries of educational policy and language assessment. As Karl Popper (1959) said, scientific inquiry is a matter of decreasing ignorance. Language-proficiency theories and accompanying empirical research of the past thirty years have somewhat decreased our ignorance. BLC Theory (Hulstijn, 2015) defines the notion of language proficiency in terms of two independent dimensions: (i) basic language cognition (BLC) versus higher language cognition (HLC), and (ii) core versus periphery. By doing so, BLC Theory breaks down the notion of “native speaker” in extralinguistic terms (e.g., age, level of education) and in linguistic terms. Perhaps “the” native speaker only exists in what all native speakers have in common (BLC), not in the many domains and levels where they differ (HLC). BLC Theory thus offers a window on the role of language in literate societies. In my current empirical work, I am discovering that the size of BLC is fairly large, perhaps roughly as large as B1 in the CEFR.
After this theoretical flight at high altitudes of abstractness, I will try to land safely on the solid soil of educational practices. I will propose that the linguistic flesh on the bones of the six levels of the CEFR should consist, in its core, of an intimate integration of vocabulary and grammar, rather than separate lists of words and grammatical constructions. Corpus linguists should help us define, probabilistically, the CEFR levels in terms of this lexico-grammar. I will end the presentation with several propositions, which may form the input for a discussion during the afternoon workshop.
Professor Constant Leung (King’s College London)
Language assessment for social affiliation?
Educational assessment in many parts of the world has taken a pro-learning stance in the past 15 years or so. Jurisdictions as far apart as Hong Kong, New Zealand and Scotland have adopted formative assessment policies that are designed to promote student learning. High(er) quality learning is generally regarded as a desirable outcome in education. Seen in this light, there has been a palpable move towards putting assessment at the service of wider educational and social goals.
In this talk I will examine how far the prevailing fundamental concepts in additional/second language assessment, with particular reference to linguistic minority students (e.g. assessing English as an additional language in the UK, or Flemish/French as an additional language in Belgium), are part of this development. It will be argued that many of the assumptions embedded in additional language assessment are largely concerned with abstracted functional-transactional language use. In the contemporary demographic and socio-cultural conditions of Western Europe (and elsewhere), language assessment should also embrace situated language practices that can take account of students’ personal investments in language use. Drawing on recent work in language education, multilingualism and translingualism (e.g. the Douglas Fir Group, 2016; García and Li Wei, 2014), I will provide a sketch of a language assessment approach that can contribute to the fostering of local identities and social affiliation.
Bart Deygers (KU Leuven)
Just Testing. Applying theories of justice to high-stakes language tests.
Is it just for a university to demand that international L2 students meet language requirements that are not met by all L1 students, who are exempt from taking the test? Is it just for a country to raise the language requirements for citizenship to a literacy level that de facto excludes people who have not had access to organized education or schooling?
These are but a few of the moral and ethical considerations that language testers and policy makers grapple with today. To date, however, there have been relatively few attempts at formulating principles of justice that could apply specifically to language testing. Yet, in a socio-political climate where high-stakes language requirements abound, it is important to have a clear concept of what just testing means and how it can be operationalized.
During the presentation I propose six principles of justice for test developers and score users. These principles are based on theories of distributive justice that focus on human rights, fairness, equal opportunity, and dignity. The overarching aim of this presentation is to advance the debate on justice, and to provide a consistent way of considering ethical and moral dilemmas that language testers and policy makers face.
Professor James Simpson (University of Leeds)
Language assessment for adult migrants: Issues and implications
This presentation is about language assessment for bilingual adults in migration contexts, particularly those with little or no formal educational experience. I begin with an overview of key issues and a summary of current research on high-stakes L2 testing for adult migrants, including discussion of the use of language tests (actual or de facto) for citizenship and naturalisation purposes.
This is pertinent at a time of large-scale high-stakes testing of adult learners who are migrants to the global north and west. What are the implications of such testing in the lives of adult migrants? I then turn specifically to the assessment of speaking skills, in a study of adult learners of English for speakers of other languages (ESOL) in the UK. I employ the notions of knowledge schema and frame in discourse to highlight two areas of interest in testing the speaking skills of these students: divergent interpretations of the test event by learners, and variation in interlocutor behaviour. What are the implications of the findings from this study for testing speaking?
Professor Kris Van den Branden (KU Leuven)
Energy for learning? The impact of assessment on learning in compulsory education
Assessment should be considered an integral part of the range of pedagogical activities that teachers develop in the classroom and which tend to have a strong impact on students’ learning motivation and growth. Both from a cognitive and socio-emotional perspective, assessment strongly influences student learning. In this presentation I will explore the many ways in which assessment can be shown to have a positive or negative influence on the effort and mental energy students invest in learning at school, and on the learning that results.
Steven Vanhooren & Maya Rispens (Dutch Language Union, The Hague)
International policy and cooperation: Opportunities for testing language proficiency worldwide (Parallel workshop)
Dutch is the official language in Flanders (Belgium) and the Netherlands. Since 1980, the Dutch and Flemish governments have set shared policy with respect to the Dutch language. They confirmed their cooperation in the Treaty concerning the Dutch Language Union.
A major outcome of this cooperation is a shared policy concerning the testing (and certifying) of language proficiency in Dutch as a foreign language. The cooperation offers unique opportunities, both in content and finances, that would not arise if Flanders and the Netherlands pursued separate examination policies. In this session, the shared policy between Flanders and the Netherlands on testing language proficiency is discussed, as well as the opportunities it offers learners of Dutch as a foreign language in more than 40 countries worldwide.
Professor Cecilie Hamnes Carlsen (Høgskulen på Vestlandet, Kompetanse Norge)
Professor Jeanne Kurvers (Tilburg University)
Giving low-educated learners a fair chance: reducing the negative impact of high-stakes testing on low-educated L2 learners (Parallel workshop)
Adult learners with limited schooling and low levels of literacy have long formed part of the immigrant population. This is only natural given the fact that for many, war and conflict in the home country is both the direct cause of limited schooling and low levels of literacy, and the reason why people apply for shelter in safer countries.
Due to a growing tendency towards more, and stricter, language requirements in Europe, low-literate learners and refugees alike now need to pass language tests to gain democratic rights, like citizenship, and even human rights, like permanent residency, family reunification and housing. It is therefore urgent that the professional community of language test developers and researchers take this easily marginalized group into account in the construction and validation of standardized language tests (Carlsen, 2015, 2016).
Low-literate learners are an understudied population in SLA research, and what we know about second language acquisition is almost exclusively based on research on highly literate, often highly educated learners (Young-Scholten 2007; Tarone et al 2009, 2010; Allemano 2013). Research in cognitive psychology, however, has shown that literacy affects the way the mind works, and in particular, the way we process language (Vinogradov 2011; Kurvers 2002; Kurvers, van de Craats & van Hout 2014). Low-literate learners perform lower on cognitive tests in general, and on verbal tests in particular (Ostrosky-Solis et al. 1998; Ardila et al. 2010). This may be due to a lack of print literacy on the one hand and a lack of test literacy on the other. Several scholars have pointed out that low-literate learners’ lack of familiarity with the testing situation is bound to affect their scores on tests (Allemano 2013; Mishra, Singh & Pandey 2012).
The aim of this workshop is twofold. First, we want to present some research into how low-educated learners learn a new language, and how they differ from educated learners in their performance on cognitive tests and language tests and in the way they process language. Second, we will use these research results as a starting point for a discussion of language test tasks that would be more or less appropriate to include in a language test where part of the population is low literate.
Professor Constant Leung (King’s College London)
Assessing additional language performance: What can language assessment descriptors tell us? (Parallel workshop)
In many education systems teachers are expected to work with prescribed or statutory assessment frameworks and rating scales that apply to all students. This approach can raise questions of usefulness and validity in educational settings with linguistically diverse students.
In this workshop I will provide participants with an opportunity to compare two sets of assessment descriptors: mainstream English (school subject) and English as an additional language, and to trace their underlying language models. The main aims are (a) to identify the similarities and differences between these sets of descriptors in terms of knowledge and skills, (b) to reflect on the curricular and language concerns embedded in the descriptors, and (c) to explore aspects (if any) of language knowledge and use that may be missing from the point of view of additional language learning and use. There will be opportunities for group discussion and hands-on activities.
Professor James Simpson (University of Leeds)
Literacy and speaking tests for adult migrants (Parallel workshop)
In this workshop participants will explore issues relating to the testing of literacy and oral communication skills for adult migrants. Using video and hands-on work with data, we will examine alternatives to standardised tests for literacy, and consider in more depth the implications for the testing of oral interaction raised in the plenary session.
The session will be framed around McNamara and Ryan’s (2011) questions about fairness and justice, which should be asked of any language test for migrants: Does it test what it should? Should it test what it does?