Wednesday, April 14, 2010

Learner Autonomy and Tandem Learning: Putting Principles Into Practice... (Schwienhorst, 2003)

In his paper, Schwienhorst (2003) focuses on three different yet interdependent perspectives on learner autonomy within the framework of CMC:

1. The individual-cognitive perspective emphasizes reflection and awareness and how they help learners continually refine their own construct systems. Tools such as questionnaires can help them to evaluate their learning.

2. The social-interactive perspective emphasizes meaningful interactions with native speakers and peers through, for example, project work. Interactions that involve both scaffolding and feedback (especially written feedback) help learners to develop language and linguistic awareness.

3. The experimental-participatory perspective emphasizes giving learners control over their learning by letting them experiment with cognitive tools, such as authoring tools, that can help raise their awareness of language.

Schwienhorst also discusses one example of learner autonomy principles in practice, known as tandem learning, which involves pairing up two learners of complementary L1-L2 combinations so that they can learn from each other. Based on his experience of running a tandem e-mail project, the author notes that problems do arise with this setup, especially when communication is conducted over the internet using commercially produced e-mail clients. A far more effective setup is a dedicated web site exclusively for tandem-learning partners, such as the Electronic Tandem Resources (ETR) site set up by Appel and Mullen (2000). Apart from having a superior organizational structure, the ETR contains a variety of useful tools for the learner, including one that measures the quantity of L1 and L2 content in each e-mail message that is written and sent.
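Out of curiosity, I tried to imagine how that last tool might work under the hood. Here is a minimal sketch of an L1/L2 ratio counter – my own toy reconstruction, not the actual ETR tool; the marker wordlists are illustrative stand-ins, and English/German is just an example tandem pairing:

```python
# A toy sketch of how an L1/L2 ratio counter might work -- NOT the actual
# ETR implementation. The tiny marker wordlists are illustrative stand-ins
# for proper language-identification resources.
EN_MARKERS = {"the", "and", "is", "was", "you", "have", "not", "how", "what"}
DE_MARKERS = {"der", "die", "das", "und", "ist", "du", "nicht", "mit", "mein"}

def l1_l2_counts(message: str) -> tuple[int, int]:
    """Return rough counts of English-marked and German-marked tokens."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in message.split()]
    en = sum(t in EN_MARKERS for t in tokens)
    de = sum(t in DE_MARKERS for t in tokens)
    return en, de

en, de = l1_l2_counts("Hallo! How was your weekend? Mein Wochenende war gut.")
print(f"English-marked tokens: {en}, German-marked tokens: {de}")
```

A real implementation would presumably rely on proper language identification rather than tiny wordlists, but the principle of giving tandem partners a per-message balance check is the same.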

The author also talks about his experiences in running tandem MOO projects, where tandem partners communicate synchronously. Despite taking a number of measures to maximize the project’s success, such as ensuring that learners were given choices, the project was initially beset by practical as well as technical problems. But after a few changes to the project structure, such as the inclusion of manageable task-based work leading to well-defined short-term goals, as well as web-based dictionaries, the outcomes were much improved.

Finally, the author draws on a framework of pressures, affordances and potentials – three areas in which a combination of technology and pedagogy will affect reflective processes – to evaluate medium- and pedagogy-specific factors in CMC, as well as learner and teacher roles.

This article reminds us once again that effective CALL development relies on both technology and pedagogy; neither one should be neglected at the expense of the other, since they are interconnected in highly complex ways. Oh, and don't forget about the theory too.

Saturday, March 27, 2010

A reflective summary of ‘Technology in testing: the present and the future’ (J. Charles Alderson)

In this article, Alderson (2000)

i. reviews the pros and cons of computer-based language tests (CBTs);

ii. explores developments in Web-based testing, citing examples such as the Educational Testing Service’s computer-based TOEFL and the large-scale diagnostic testing project, DIALANG; and

iii. outlines a research agenda for future studies.

According to the author, there are technical, administrative and pedagogical advantages of using CBTs over paper-and-pencil tests. Some of these include immediate student feedback, personalized testing, increased options for test administration, the possibility of storing enormous amounts of data for research purposes, and increased test security through test item randomization.
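That last advantage is easy to picture in concrete terms: if each candidate receives a different selection and ordering of items, copying from a neighbour becomes much harder. Here is a toy sketch of my own (not something Alderson describes) of how a per-candidate test form might be drawn from an item bank:

```python
import random

# Hypothetical item bank -- placeholder IDs for illustration only.
ITEM_BANK = [f"item_{i:02d}" for i in range(1, 21)]

def test_form_for(candidate_id: str, n_items: int = 5) -> list[str]:
    """Draw a per-candidate selection and ordering of items. Seeding the
    generator with the candidate's ID makes the form reproducible, so the
    scoring system can reconstruct exactly what each candidate saw."""
    rng = random.Random(candidate_id)
    return rng.sample(ITEM_BANK, n_items)

print(test_form_for("candidate_A"))
print(test_form_for("candidate_B"))  # a different selection and order
```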

On the flip side, notable disadvantages include the possibility of bias against computer-illiterate individuals, limitations in technology that do not allow for accurate assessment of productive language skills, and a limited choice of test formats (e.g. multiple choice and gap-fill), which can lead to decontextualized forms of testing.

It seems that the advantages are great enough to warrant the development and use of computer-based tests in the field of EFL; however, at the time Alderson’s (2000) article was written, very few attempts had been made by test writers to develop value-added web-based assessments. The computer-based TOEFL was one such attempt; according to the author, it contained a number of innovative features but offered no evidence of added value in the eyes of test candidates. DIALANG, on the other hand, which was, and still is, used for non-certification purposes only, is rather innovative: it can assess proficiency in reading, writing, listening, vocabulary and grammar in 14 different European languages.

The author concludes his article by posing a list of intriguing questions for future researchers to mull over. The ones that pique my curiosity the most are, “What does the provision of support imply for the validity of the tests, and for the constructs that can be measured?” and “What is the value of allowing learners to have a second attempt, with or without feedback on the success of their first attempt?” For a minute, let’s just forget about the implementation of computer-based testing. In the context of Hong Kong classrooms, I would imagine that ‘self-assessment’ is, relatively speaking, an unknown practice, and it is going to remain that way for the long haul, that’s for certain. But if it were ever made prevalent, could you imagine a scenario where a test taker is given a second chance to answer the items which he or she got wrong the first time round – and not only this, but with teacher support thrown in as well? How would the second set of test results be used – would they count as part of the overall grades required for entry into a secondary school or a tertiary institution? If yes, how would the candidates feel about this? If no, I wonder what proportion of students would be bothered about the results after going through self-assessment. Surely, ‘normal’ or ‘high-stakes’ assessments will only ever be conducted in the traditional manner (fixed time and location, paper-and-pencil format, one-paper-fits-all, etc.), whereas online assessment will always be reserved for ‘low-stakes’ or diagnostic purposes, which some candidates may not consider as important or essential. Would anyone care to disagree?

Friday, March 26, 2010

A reflective summary of the article 'Can learners use concordance feedback for writing errors?' (Gaskell and Cobb, 2004)

In Gaskell and Cobb’s (2004) study, 20 low-intermediate level students enrolled in a 15-week English writing course at a university in Canada learnt how to use, and practised using, an online concordancer to correct errors in their own written assignments. These errors, which were of a grammatical and collocational nature, were made explicit by the instructor during the initial period of training, and students simply had to click the HTML links to the relevant concordances before revising the errors. Following the training, the instructor stopped providing the links.
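For anyone who has never used a concordancer, the heart of what the students were clicking through to is a keyword-in-context (KWIC) display: every occurrence of a search term lined up with its surrounding text. Here is a toy sketch of the idea – my own illustration, not Gaskell and Cobb’s actual tool:

```python
def kwic(corpus: str, keyword: str, width: int = 30) -> list[str]:
    """Return keyword-in-context lines: each occurrence of `keyword`
    with `width` characters of left and right context."""
    lines = []
    text = corpus.lower()
    start = 0
    while (i := text.find(keyword.lower(), start)) != -1:
        left = corpus[max(0, i - width):i]
        right = corpus[i + len(keyword):i + len(keyword) + width]
        lines.append(f"{left:>{width}} [{corpus[i:i + len(keyword)]}] {right}")
        start = i + len(keyword)
    return lines

sample = ("She is interested in music. He is interested in sports. "
          "They are interested in languages.")
for line in kwic(sample, "interested in"):
    print(line)
```

Scanning the aligned contexts is what lets a learner induce the pattern for themselves, without any metalanguage.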

The researchers aimed to answer the following questions:
i. Will learners consider the concordancing activity useful?
ii. Can learners use concordances to correct their errors?
iii. Will correcting with concordances reduce errors in free production?
iv. Will learners use concordances independently following training?

Several sources of data were collected, including pre-test and post-test writing samples, weekly error analysis forms, the results of a student survey on attitudes toward the concordancing activity, and network records of the IP addresses from which concordance searches were issued.

The main findings were that
- all the students felt that their writing skills had improved, although only 8 students attributed the improvement in their ability to use the grammar points targeted in the course to the corpus consultation work itself;
- students could independently work from concordance to correction even though this process did not necessarily help reduce the number of errors they made in their post-test writing samples.

Gaskell and Cobb’s (2004) study shows that concordancing in writing classes can help provide feedback to students on word-level and sentence-level errors. This is especially important for second language learners, who have neither the opportunity nor the time to learn, as the authors put it, "through enormous amounts of brute practice in mapping meanings and situations to words and structures". They argue that such learners need to explore language through data-driven learning – learning from exposure to examples containing repeated patterns that are made salient – as it provides substantial amounts of practice on target errors which would otherwise only be met once in a while.

Although the participants in Gaskell and Cobb's (2004) study were adult learners, their article inspires me to consider trying out a similar concordancing activity with my primary six students sometime in the near future. Nine times out of ten I would prefer to have students correct their own errors rather than do the corrections for them myself, and online concordancers offer a gateway for me to achieve this. Many of my students are already actively blogging in English on a regular basis, so it would simply be a matter of having them copy and paste their writing into a Word document and send it to me (or could I add pre-cast links next to erroneous sentences in their blog postings? I don't think this is possible myself). I would imagine, however, that they would require more support, especially in terms of the amount of training and practice they would need to master retrieving and interpreting concordance information. I would imagine also that pre-cast links would have to be provided indefinitely, because many of my students have not reached the stage where they can easily locate and point out mistakes in their written compositions.
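On the pre-cast links question: if the blog platform accepts raw HTML, one conceivable workaround would be to generate the concordance-search hyperlinks myself and paste them in next to the erroneous sentences. A rough sketch, assuming a hypothetical concordancer whose search term can be passed as a URL query parameter (the endpoint and parameter name below are invented for illustration):

```python
import urllib.parse

# Hypothetical endpoint -- a real concordancer's URL format would differ.
CONCORDANCER_URL = "https://example-concordancer.org/search"

def precast_link(phrase: str) -> str:
    """Build an HTML hyperlink that runs a concordance search for `phrase`."""
    query = urllib.parse.urlencode({"query": phrase})
    return f'<a href="{CONCORDANCER_URL}?{query}">check: {phrase}</a>'

# e.g. annotate an erroneous phrase spotted in a student's blog post:
print(precast_link("like listen"))
# <a href="https://example-concordancer.org/search?query=like+listen">check: like listen</a>
```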

Thursday, March 25, 2010

Ugliest / Worst Web Pages of the Decade

Let's hope that all our soon-to-be-created web pages turn out to be better or at least better-looking than these, hey?

Friday, March 19, 2010

My thoughts on the 'Check My Words' toolbar

Earlier today I downloaded the 'Check My Words' toolbar to my computer with no problems. Once you have opened Microsoft Word (the 2007 version), you can find it by clicking the Add-Ins tab at the top of the screen.

To put 'Check My Words' to the test, I first typed up in Microsoft Word a piece of writing that one of my students had done recently. In his writing task, the student (primary six level) was asked to respond to the question, "What would you do with HK$500 if it were given to you for your birthday?" with an essay of at least thirty words. This is what he came up with:

If I have HK$50 I think I will buy the mp3 give my friend Louis because he like listen music. I with buy the Harry Potter because I have Harry Potter one, two, three, four, five but I not have Harry Potter six so i buy it.

Imagining that I were in my student's shoes, I highlighted certain words which I felt I would need help with and clicked on the 'check' tool. For example, when I highlighted 'like' and clicked on 'check', a comprehensive list of common and potential errors pertaining to the use of this word appeared on the left-hand side of the information screen. The only trouble is that if I did not know (and my student would definitely not know) the grammatical terms 'subject-verb agreement' and 'infinitives and gerunds', I would have to wade through all this and other information, spending a huge amount of time picking out what was necessary for me to be able to correct my written phrase 'he like listen music'.

The 'Say It!' tool is much more useful for primary level students. When I highlighted the whole essay and clicked 'Say It!', an American-sounding speaker reproduced the words accurately and remarkably smoothly. I suppose the tool would be useful on occasions when students have to read essays, poems and so forth aloud and need to know the exact pronunciation of certain words, phrases or even whole sentences. Just a minor point – I like the way the speaker pauses each time there is a comma in the text (contrast 'Mum said I am clever.' with 'Mum said, "I am clever."'). Pity it doesn't work so well with exclamation marks.

The 'Definitions' tool provides a list of all the possible meanings of a highlighted word, drawn from various online dictionaries (you can choose which one). A link to an online bilingual dictionary would be more useful for primary six students in my opinion.

The 'Similar Meanings' tool is like a thesaurus - great for providing students with lists of alternatives to words that they have written (synonyms). It displays the opposites of those words (antonyms) as well.

The 'Word Family' tool tells you the parts of speech of a highlighted word and its related family of words (e.g. like has likes, liked, unlike, liking, etc.), as well as a rating that indicates its importance, i.e. how commonly it appears in writing or speech.

The 'Word combinations' tool can benefit students' writing immensely because it tells them whether a string of words can be used together. After highlighting 'like listen' in the essay above and clicking the tool, I was presented with eight example sentences containing 'like listening' on the Word Neighbors webpage. This is more than enough to convince me that my written combination of words needs correcting. A useful link on the webpage is an embedded English-Chinese translation tool.
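As far as I can tell, what a tool like this boils down to is checking how often a candidate string of words actually occurs in a large reference corpus. I have no idea how Word Neighbors works internally, but a crude sketch of the general idea might look like this (the mini-corpus and function are my own illustration):

```python
# A rough sketch of the idea behind a word-combination check: count how
# often each candidate phrase occurs in a reference corpus you supply.
def combination_counts(corpus: str, candidates: list[str]) -> dict[str, int]:
    """Count exact occurrences of each candidate phrase in the corpus."""
    text = " " + " ".join(corpus.lower().split()) + " "
    return {c: text.count(f" {c.lower()} ") for c in candidates}

reference = ("i like listening to music . she likes listening to the radio . "
             "we like listening to stories .")
print(combination_counts(reference, ["like listen", "like listening"]))
# {'like listen': 0, 'like listening': 2}  -- evidence for 'like listening'
```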

I have nothing to say about the 'Example Sentences' tool as it is self-explanatory.

It appears, then, that the 'Check My Words' toolbar has been developed with university-level students in mind, though I can see that some of the built-in tools could be used even with upper primary level students. It goes without saying that if you are going to introduce your students to the various resources available in the 'Check My Words' toolbar, you'd better make sure that they can use Microsoft Word comfortably. In my case, I very much doubt that my primary six students have any typing experience whatsoever (in English, that is), let alone any experience of using Microsoft Word.

I can, however, imagine how the tools could be of enormous benefit to advanced-level students at college or university, particularly those who are enthusiastic about grammar!

Saturday, March 6, 2010

My response to the article 'Practical considerations for multimedia courseware development: An EFL IVD experience' by Hsien-Chin Liou

One thing that stood out for me after reading this article is where the author stresses the importance of knowing

  • the media (particularly their merits)
  • the institutional needs or constraints (which include the learners), and
  • the design principles (which include knowledge of pedagogy and language learning theories)

when making design considerations for her multimedia courseware development project.

Put simply, the three main factors that need to be considered in CALL development projects are the learner, the technology and the language learning theory. Thus, in contrast to what we read in another article, written by Levy (1997), a few weeks back, there are, in fact, not only two choices for the point of departure in CALL development (namely, theory and technology) but three (learners' needs as well). It makes sense to begin by analyzing the needs of our learners, and this analysis should be both informed by our knowledge of what the technology can do for learners and grounded in second language acquisition theories.

Another point I'd like to mention, or rather question, is this: if courseware development involving multimedia is so labour-intensive and time-consuming, where does that leave us with our web projects? Liou's article was published in 1994; I'd like to know whether, sixteen years down the line, there exists an authoring package that takes far less time for novice programmers to master than IconAuthor does. And what about those pre-manufactured 'templates' that help language teachers to implement their courseware materials easily? Are they more readily available nowadays?

Friday, February 19, 2010

Evaluating CALL courseware: My thoughts on Hubbard's (1988) framework

Just as I was getting to grips with some of the ideas presented in Levy's (1997) article regarding the CALL courseware development process that we read about in Week 5, I came across the word 'evaluation' in the title of this week's reading, and the first thing that came to mind was "Oh no, not now, please." (lol) You see, up until a little while ago, I still hadn't decided which piece of technology or courseware would form the focus of my EN6482 assignment, let alone thought about how I would develop and implement it. So, at first, thinking about 'evaluation' seemed like a step too far ahead for me. Later I realized, however, that I had misinterpreted 'evaluation' as meaning the evaluation of students' learning as a result of CALL courseware implementation, rather than the evaluation of the courseware itself. (Obviously I should have read the title of Hubbard's article more carefully!)

Let's be honest: when it comes to evaluating CALL courseware, no evaluation scheme can possibly be more comprehensive or more flexible than the framework put forward by Philip Hubbard (1988), a linguistics expert from Stanford University, more than two decades ago. It is comprehensive in that it contains sections covering every possible angle as far as the evaluation of computer-assisted language learning and teaching is concerned, including 'operational description', 'learner fit' and 'teacher fit', each of which has a number of distinct components that need to be looked at in any courseware evaluation procedure. The framework is also flexible because, as Hubbard explains, it provides a tool through which courseware evaluators can create their own questions or build some other evaluation scheme according to their own needs.

Some courseware evaluators may be put off by the apparent complexity of Hubbard's evaluation framework (with all those arrows and boxes drawn in) and may also harbour worries about the length of time it takes to evaluate a single courseware package. "Fear not," I would say to them, for the author provides the reassurance that only those who need to compose reviews of a package that doesn't appear to be suitable have to go through the full evaluation procedure. Ordinary teachers like you and me just need to use the framework as a guiding tool to quickly weed out courseware packages that do not fit the bill. As Hubbard mentions, even if we just address the question of whether the courseware fits our students' needs and interests, it will go a long way toward making an informed decision. Flexible it is indeed.