Tag Archives: Reflection

TBVGBTS: Teaching Grammar/Lexical Chunks

A word of warning: if you’re looking in this post – or anywhere, for that matter – for clear and definite answers about whether we should teach discrete items of grammar and/or lexis, I both fear for your sanity and suspect you will be disappointed. However, if you’re interested in a few anecdotal experiences from the Korean class I took recently (see below for links to other posts), read on.

Let’s first be clear about what we’re discussing here. It’s been pointed out, quite rightly in my opinion, that the line between grammar (more often than not meaning verb morphology) and lexis (meaning words and phrases) is a thin and blurry one. The theory goes that when teaching polite offers, it is probably easier to define the “Would you like to…” in “Would you like to go to dinner with me?” as a whole chunk of language, rather than breaking it down into a modal plus a main verb with infinitive complement (if those are even the right terms). However it is defined and taught, though, this is what I want to discuss in this post: a pre-selected, discrete item presented for learning by the teacher or the syllabus, the kind that makes up the majority of general English courses. For the moment I’m going to leave aside single words and very short phrases – those are for a future post.

My Korean course seemed to be organised around topic and text; judging by the somewhat scattergun approach, discrete items seemed to be selected based on their appearance in the texts rather than any theory of linguistic development. Items were presented in the book as a kind of gloss below the reading, with a formula (interestingly using English word classes – something like “N을 통해”/“Through [noun]”) and a couple of example sentences in Korean, which I often found fairly unhelpful in ascertaining the function of the item.

If the items were selected on the basis of appearance in texts, there would seem to be one major disadvantage: such items tend to appear more in writing than in speech. In a general course this leads to a serious imbalance between written and spoken registers, and for a learner like me who is much more focused on speaking, there is an inevitable switching off when the teacher says “written grammar”, leading to a serious lack of will to try to use it, not to mention a similar lack of opportunity. However, I do notice one of my classmates trying to use this grammar in speech, presumably for practice purposes rather than because she doesn’t realise it’s written register, and I wonder how helpful this might be.

One thing that I found unhelpful about the presentation of grammar functions was when an item was presented in terms of a simpler function. Female teacher was very fond of presenting items like this: “you can say this easily as [something that we already know]”. She’s trying to be helpful and connect us to existing knowledge, but at this point my brain says something like: “if I can say it easily like that, why should I bother to learn to say it in a more difficult way?” (Wait, I’m just a rubbish language learner, aren’t I?) Maybe not, because for me there needs to be a comparison between the simple form and the complex form and their subtle differences, but this is not forthcoming. I will refrain from being too critical of the teacher here, though, as I can think of times where I have done something similar, for example presenting three different ways of expressing the same function at the same time, without pointing out how they might differ or considering that learning one might be enough for that class.

The teaching of grammar and chunks on my course could best be described as PP (the P that’s missing is produce), but there’s not even much presentation going on. Female teacher (sorry to keep picking on you, but you were the worst offender here) vaguely directs our attention to the example sentences and expects us to guess from context, but I was frequently unaware that I was even supposed to be looking in the first place. Even if I was, two example sentences with no explanation are simply not enough to grasp the concept, especially when there may be unknown vocabulary in those sentences, and the sentences are essentially decontextualised anyway (with hindsight, I realise I could just have looked back at the text to see the sentence in context, but it didn’t occur to me at the time). The result of all this is that while I’m still trying to grasp the basic meaning of the sentence, the class has moved on to the controlled practice stage.

Practice is facilitated by the workbook. We are given parts of sentences, and sometimes have to complete a matching exercise to establish the semantics. Then the task is to write out the sentence including the language item we are practising. Except it isn’t, because every teacher asks us to speak our answers immediately. Now, I like to think I’m OK at grammatical manipulation, but when the presentation stage has left me with such a thin grasp of the concept, this seems rather unfair, and I wish for some time to sit down and figure out quietly just what is going on. A further grievance is the half-personalisation that forces you to start a sentence that you really don’t want to complete. To return to my previously published diary extract:

“There’s a horrible moment where as a personalization thing I have to create an example of the difference between Korean and English girls. I struggle for something inoffensive, fail and settle for a fat/thin distinction. The girl opposite me sighs.”

Again, I can remember more than one occasion where I have asked students to do an exercise first orally, and I’m sure I’ve set similar half-personalisation exercises too. In future I’m at least going to consider the difficulty and newness of an item while deciding how best it might be practised, and also give students the opportunity to change or completely rewrite practice sentences.

It’s after the controlled practice stage that the teaching process ends. Just like that. This is partly because there are three or four short grammar points to cover from each unit, and so we rush on to the next one in order to fit them all in. On reflection, I don’t think the grammar was that important to the course designer; it’s only there to facilitate understanding of the texts. Ironically, I have often not even noticed the grammar/chunk when reading the text and have instead just skipped over it. This might explain some of my lack of interest in the grammar we are being taught – it doesn’t have enough semantic or functional weight to be worth learning. Here, I think, I’ve come to the point. There is very little recognition that the language that we are being taught could or will ever be used to do anything, nor that production of a feature is in any way important for understanding it or incorporating it into my Korean. This is partly the fault of the book, but some blame must also lie with the teachers. All the things that I might associate with this kind of language work – goals, planning time, feedback, contextualized examples – are missing. In short, there is no teaching.

It is no surprise, then, that I can’t think of a single discrete item that we were taught on the course that has subsequently appeared in my spoken Korean. However, I have found myself using several features that I encountered in reading and listening texts; features that I was previously dimly aware of. Perhaps this tells us that language acquisition is a gradual process of becoming aware, noticing and finally using. Maybe the production stage of a PPP lesson and its various equivalents are superfluous. Still, I would like to have been given the opportunity to find out; I feel strangely cheated by not getting the chance to experience a single lesson with a grammar focus and clear output goals, even though I don’t believe that’s a particularly effective way of teaching.

I want to finish with a note on my teachers, who I have been fairly critical of in this post. All of them seemed to me to be friendly, patient, enthusiastic and wholehearted people with excellent content knowledge, and I was very happy to be taught by them. I am very much unaware of the forces in operation outside the classroom such as time or institutional pressure. I’m also aware that I see the classroom very much through Western eyes and there are all kinds of lurking prejudices that colour my perceptions. Thus, I hope you read this post in the spirit of honest enquiry, and I will leave you with some questions to ponder.

  • Is teaching like this enjoyable for the teacher? (How) do they think they are helping the students?
  • How representative is my classroom of other language teaching contexts in Korea? I am thinking in particular of English taught in schools.
  • I know that two of these teachers have MAs in foreign language teaching. I presume that they must have come across communicative approaches, PPP and the like. What stops this filtering into their practice?
  • Am I just being unnecessarily critical here?

Cheers,

Alex

Links to other posts about this Korean course


TBV goes back to school: Selected diary extracts

Hi.

This post is intended both as a preview of some upcoming posts for the 2.4 people who are waiting for news of my recently finished Korean language class, and also a way of reviewing my notes from the whole experience in preparation for writing more detailed posts. During the course I was reasonably diligent about writing for 30 minutes a day about things that I noticed in class and how I thought I was progressing. The extracts below are from those writings, and might give you an idea of some of my raw reactions to the course. Apologies for any unpolished language, shouting and insensitivity that may occur.

Day one:

“It strikes me on the way in that language classes are MENTAL! You can forget as a teacher that gathering in a place to speak in another language is a fairly extraordinary thing to do, and learners often don’t have a clue how they are supposed to behave in this context. When I get to my classroom, there are two girls sat in the dark. I smile and issue a greeting in two languages, which gets little response. Silence and awkwardness descend, probably because nobody knows what language to speak. We are well outside our comfort zones before the teacher even enters the room.”

“One thing I note is that there is no effort at all to create a sense of a group, and no talking to each other initiated by the teacher, though thankfully at least four of us manage to get some chatting done in Korean and get to know each other a little. This to me is a big negative and maybe something that Korean teachers don’t consider so much in class?”
“Oh yeah. Paying 26,000 won more for a textbook when I’ve already paid 700,000 won for the course? Piss off.”
Day two:
“I feel like someone has tried to make Foie Gras by stuffing my brain so full of stuff that it explodes.”
“We quickly got sidetracked onto a discussion about whether nose shape was as important to Japanese people as Koreans (it isn’t). There wasn’t any feedback or sense that the teacher was listening. In fact, she went out of the room for a time.”
Day three:
“I’m feeling quite humble today. One thing you are maybe not aware of in class is quite the level of confusion amongst your students. Perhaps it doesn’t happen to you, but if it does, are you wont to blame the students for not paying enough attention or not checking with you? I have been guilty of this in the past, but no more! This morning everyone turned up with different versions of what we were supposed to have done, and we had to check the details with the teacher. Almost all of us were wrong too!”
“I’m finding myself becoming more and more of a fan of ICQs, just because they’d give us a chance to go over what was said one more time. Even asking “Do you understand?” would be a nice chance to say ‘no’.”
Day six:
“The teacher explains all of the vocabulary first, and then asks us to read aloud, filling in the blanks on the hoof. This is near impossible and really annoying, especially as I’m discovering that reading aloud focuses all of my energy on making the sounds rather than understanding the words and is therefore not helpful at all. I wonder if reading a phonetic and a non-phonetic language aloud are different cognitive processes?”
“The whole segment is basically a disaster for me. The teacher assumes I will know words like 특징 (point of difference), which I don’t, and I spend the whole time struggling to stay afloat. I imagine the same is true for others, but the teacher never stops to find out. Once we’re through the reading, there are some comprehension questions that he asks and then answers straight away. At very few points are we left alone to read or think in peace.”
“Then again, I’m yet to experience a lesson structured around a clear target, at least one based on spoken output.”
Day seven:
“One thing that’s bothering me today is the sheer burden of the vocab learning on this course. Every day we are given 30 to 40 vocabulary words to learn, most of which are new (to me at least), and every day we are tested on them. The effort to get all of those into my memory is severely affecting the amount of work that I can put into other areas of language learning such as re-reading or pronunciation, and it feels limiting. It’s bad enough having to get up at 6 am without having to study all of the way to school too.”
Day eight:
“We do some listening, and she breaks us into groups to discuss the answers. This is difficult because the people I worked with didn’t really say much. We fudge with the tapescript until the teacher tells us the answers. We then listen one more time with the teacher repeating. This is helpful in terms of making sense, but I would surely like to work a bit harder on the things that I didn’t know or didn’t hear.”
“There’s a horrible moment where as a personalization thing I have to create an example of the difference between Korean and English girls. I struggle for something inoffensive, fail and settle for a fat/thin distinction. The girl opposite me sighs.”
Day nine:
“I would say that the big improvement has been in using Korean for the purposes of being a member of my class. I’m feeling noticeably more confident about speaking in public and using the respectful style and honorifics to other class members, even if I’m the oldest and these could generally be skipped. The confidence, though, could as easily be ascribed to getting used to new environs as to any meaningful language development.”
Day ten:
“I’ve found that I’m not very good at remembering to use stuff in general in class, unlike another girl who seems to be able to remember to jam things we’ve learned into conversations in class. Part of the reason is that a lot of the grammar we do is pointed out as more written and formal register, but this shouldn’t be an excuse. Still, some planning time would be great and I feel like I’m being denied the chance to create anything with language. I feel like a lot of the speaking that I do in class is not oriented towards language development, but more towards sharing ideas.”
 “I’m beginning to think of fossilization not so much in terms of errors, but in terms of ways of getting things done in the language, and I think that written input might be the best way to destabilize it.”
Day eleven:
“I think if I hear another unrelated anecdote I am likely to sink deep into a pit of incomprehensible despair. But at least I’m understanding, right?”
“In fact, I had got a bit lost towards the end of the first point, and it was the pause, not any structural knowledge that alerted me to the fact that something new was coming. And here’s the thing: do we really, really need to teach people to listen for pauses? Am I just such a go-getting, switched on language learner that I don’t have to be taught this stuff?”
Post-course:
“A final question is how much teachers of English and other relative majority languages should hold teachers of relatively minor languages to the same professional standards. I have almost effortless access to a raft of literature, blogs, conferences and colleagues from which and whom to learn.”
Reading those quotes back, they actually paint a fairly accurate picture of my experience: really fascinating, yet not always for the right reasons. However, the course did yield a fair amount of learning and confidence in my second language, and provided some really interesting insights into teaching and learning too. Writing this post has helped me develop a long list of things to blog about in longer form over the next few weeks, so stay tuned if you’re interested.
Cheers,
Alex.

Different approaches to writing: Reflecting on feedback

I’ve finally reached the end of the writing course that I have been teaching for the last four and a half weeks, about which you can read more here and here.  It turned out that I was so busy with the course that I didn’t have much time for blogging, something which I’m trying to make up for now. The course was a bit of an experiment, and thus to go some way towards collecting some experimental evidence, I gave myself 5 areas that I wanted to reflect on during the course. This post attempts to summarize some of what I think I found out about feedback.

I’m going to start by simply listing the kinds of feedback that were given over a typical week of the course, and who gave them to whom, along with any extra notes. Small group refers to a group of three or four students in which students work throughout the week, and English Cafe refers to a 10 minute optional one on one writing clinic style meeting.

  • Feedback on small group analysis of writing and linguistic characteristics of sample piece: Teacher > Whole Class. Chalk and talk style.
  • Feedback on essay plans: Small group > Student. We started this as a very structured event, but it ended up in the form of a brief informal chat about ideas only. Teacher > Student. Usually given as part of a five minute meeting to review plan and early writing in class. Further feedback available in English Cafe.
  • Feedback on first drafts: Small group > Student. Feeding back on elements of writing that we studied based on rubric and peer assessment sheet. Teacher > Student. In English Cafe, verbal.
  • Feedback on errors: Teacher > Student. Delivered via a system of error codes, with opportunities for further help. Small Group > Student. Peer-correction via error codes but scrapped after one week due to student feedback and course restructuring. Teacher > Whole Class. Feedback on common errors in the form of short presentations (also available as screencasts).
  • Feedback on final drafts: Teacher > Student. Given as a set of scores for the final piece based on a rubric for that week.

And here are some observations about the results of this feedback from my reflective journal.

  • Students really seemed to absorb the five paragraph essay structure in the first week. This was an explicit and lengthy focus of the first textual analysis, plus extra focus in small groups and one on one feedback. In the second week, structure was mentioned as part of the analysis, but not focused on. Some students struggled to clearly state an opinion and keep topics to one paragraph in the second week.
  • Other writing techniques that we focused on, such as parallel grammatical structures, don’t seem to have been taken up. However, I do notice other phrases from my pieces that I hadn’t highlighted popping up in students’ pieces.
  • I do a lot of feeding back on plans, and shifting ideas around, asking questions etc. It seems like students generally find this helpful. I then do a lot more shuffling around of ideas at the writing stage with students who come to see me in English Cafe. These students are often the same ones whose plans I’ve shuffled around.
  • Students are surprisingly willing to rewrite paragraphs and even entire essays. Much more willing than I would have been on a foreign language course. Either that, or they are incredibly good at putting a brave face on it. When they do rewrite these paragraphs they often incorporate the ideas that we discussed and it does usually make for much better essays.
  • Much of the feedback on writing that I gave was useful for that week, but rather useless for following weeks as it wasn’t relevant to a different genre.
  • The amount of time that we had for working with errors was extremely limited, and explicit focus on grammar errors in class or group situations took up less than 10% of class time. Nevertheless students’ accuracy seemed to improve significantly over the course.
  • Students really struggled with punctuation. I suspect this is something that was new for many of them. After I made a brief presentation about conjunctions and periods not going together (usually) errors of this kind disappeared almost entirely. One student, having emailed me her essay, ran into my office in the morning in order to correct an error of this kind that she knew she had missed. Another student specifically mentioned this as being especially useful in the end of course survey.
  • In the same survey, students rated my advice on first drafts and error codes as the most important parts of the writing process for helping them to write good pieces. In general small group peer feedback tended to be rated least important, but very little was rated as not useful.
  • Students seemed to be fairly clearly divided between those who wanted feedback, and would seek it out, and those who didn’t want it and in some cases would try to avoid it. One student suggested that I make English Cafe a mandatory part of the course. To add to this, the student with the best English on the course was less than keen to seek my feedback.

So what does all of this mean in terms of the reflective questions which I posed at the start of the course? First up, “What is the best way to deliver feedback?” Results from the two feedback surveys that I gave during the course both highlighted the importance of individual feedback from the teacher. This meshes with my view that what we might call “learner syllabuses” are best dealt with on an individual level rather than as a group: each “syllabus” will be at a different stage, and so teaching discrete grammar items, and to some extent writing skills too, will either be wasted on those who already have them or lost on those who are not ready. The error codes system does exactly this and I would consider it one of the most successful elements of the course (survey responses suggest that students feel the same). Furthermore, given the range of topics and ideas in the essays, most writing problems have to be dealt with on an individual level. This was the kind of approach I set out to try at the beginning of the course and overall it seems to have been successful.

It is therefore tempting to suggest that what is required is an even more individualized approach, with a minimum of small group or class work. However, as far as I can see there are two major problems here. Firstly, individual feedback does not suit everyone, especially when a lot of it requires the student to seek it out. I felt that the student who suggested that I make the optional feedback mandatory was lacking initiative, but looking at it another way it could be seen as a request for help. There are a multitude of personal and cultural reasons which could prevent students from actively seeking out feedback, and I would do well to remember that I wouldn’t have been too motivated to get help while I was at university. The flipside is that making it mandatory is totally not my style, and risks making students opposed to the process, which is not a good mindset in which to receive feedback. The answer, I think, is to identify those students who might benefit from feedback but not seek it out, and encourage/push them a bit more. This is something I can do better as a teacher in general – it’s always nice when students want help, but sometimes the job is to help those who don’t want it or are just too shy or lazy to seek it out.

The second major point is that almost all of the whole class feedback that I gave seemed to be taken up effectively. The key here was that this kind of feedback was based on errors that emerged from the essays, which suggested that the bulk of students were ready for it. Clearly this could be delivered individually, but the workload in teaching a course like this is already high, and so delivering it to the whole class is much more efficient.

My second question was, “How can I make sure that feedback is taken on board?” In general, feedback that I gave to individuals and the whole class about their writing made it into their final essays. This was pleasing as I worked very hard on structuring the course in order to allow for the maximum amount of feedback and revision. Instead of teaching the writing process, we just did the writing process (and I had good feedback as to its usefulness). The error codes also attempted to get students to think for themselves about errors, rather than simply get corrections. As I said, all of this seemed to be reasonably successful, but as I didn’t have a control group, there’s nothing to draw a comparison with. Still, I feel like this approach is something that I would do again.

Finally, I wanted to tackle the role of conscious learning in this process. I was quite surprised at how little we were able to do as a class; I had sort of imagined that common errors would form the basis of quite a lot of grammar teaching. In the end I think I “taught” only one or two grammar points to the whole class. I also thought that errors might point the way to wider rules of language, which was the case a few times, but a lot of the time the errors were specific lexical ones related to word class or verb patterns, slips which students had momentarily forgotten the rule for, or sentences so awkward that they could not easily be fixed by the application of one or two grammatical or lexical tweaks. So really, traditional, structural teaching of grammar was almost absent from my class. Nevertheless, I seemed to be using far fewer error codes at the end of the course than I did at the beginning, so something must have been happening.

I’d like to suggest that this may have been more a case of attitude than of conscious learning. Although the activities where we worked with errors took up a minimal amount of class time, they were designed to raise awareness of errors. I have already talked about the error codes, but a further part of the teaching cycle was to have students analyze and present an error to a small group, focusing on why they made the error, how they could fix it, what they could learn from it, and how they could prevent it in future. It seems that, for this group of learners at least, general awareness raising may be the most important part of error correction, rather than any specific grammatical or lexical gains. It’s notable that the focus was on specific items of grammar and lexis rather than a general “focus on accuracy”, yet this seems to have led to much wider gains in accuracy. Again though, there’s no control here for comparison.

In conclusion, I think that some form of individual feedback is necessary, and I strongly believe, despite the lack of evidence, that it has to be given in a situation where it can be used immediately, to maximize the chances of its being taken up. It’s also necessary to remember that students may be resistant to, or uninterested in, this kind of feedback, and it is up to the teacher to ensure that this feedback reaches these students, as they may be the ones who need it most. The real eye-opener from this post, however, is the role of error correction and language work in setting general attitudes, and the possible overall accuracy gains that can be achieved with even a small amount of work on specific items. What this might mean for future courses is students taking even more responsibility for finding their own errors and sharing them, but for now I’m way over my word limit and very hungry, so I’m calling it here.

Cheers,

Alex


Collecting feedback on my exams

In my last post I lamented my skills as an examiner in the written format, and suggested that I might need to gather some feedback on my exams. So I did! In this post I’m going to outline how, and what the results were. Note to readers: this one gets a bit long and statsy. Go forewarned.

The method

My last post was really the start of the feedback process, and was actually a good start as it allowed me to figure out exactly what I wanted to know. In bullet point form:

  • How students felt about their exam score.
  • Possible reasons that they received that score.
  • Possible effects that their score may have.
  • If students felt the exam was a fair test.
  • How the exam could be improved.

From these basic ideas I generated a list of questions, and had one of our teaching assistants translate them into Korean. I chose to use the L1 to maximize return, in the belief that it would help students understand and complete the form better. I then created a Google Form (very simple!) using the questions and a Likert-like 1-5 scale from strongly disagree to strongly agree. I chose numerical responses in order to minimize translation on my part too, and because it would give me some stats to play with. I sent a link to the form to students via Kakao Talk. I got a fairly useful 40 responses out of 57, perhaps helped by the carrot of free drink prizes from the university cafe. This is what the results said.
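(For anyone curious about the mechanics, here’s a minimal sketch in Python of the number-crunching involved, assuming the Form responses are exported as a CSV. The file name “survey.csv” and the column name “written_happy” are hypothetical stand-ins for your own items, not my actual ones.)

    # Sketch only: mean and SD for one Likert item from a CSV export
    # of the Google Form. "survey.csv" and "written_happy" are
    # hypothetical stand-ins for the real file and question names.
    import csv
    from statistics import mean, stdev

    with open("survey.csv", newline="", encoding="utf-8") as f:
        answers = [int(row["written_happy"]) for row in csv.DictReader(f)]

    print(f"n={len(answers)}, mean={mean(answers):.2f}, sd={stdev(answers):.2f}")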

How did students feel about their exams?

  • I was happy with the result of my WRITTEN exam (mean=3.56, sd=1.16)
  • I was happy with the result of my SPEAKING exam (mean=3.54, sd=1.02)

Student responses indicated that they were similarly happy with their written exam score and their speaking exam score. This was not what I expected given what I thought was the relative difficulty of the exams, and my happiness with student performances. However, I then remembered that I’d adjusted the written scores upward in order to fit them into a curve, and wondered if this was the reason. I decided to look at mean exam scores for some extra insight. However, looking at the numbers, I came across a small problem. I’m dealing with three different classes’ exams and survey responses, without knowing if the classes are represented equally within the responses. Given that their exams were of different difficulties, this is a potential source of dodgy calculations.

Nevertheless, there’s no choice but to lump all of these scores together to give us the following:

  • Raw mean written score = 35.00 (sd=8.71)
  • Adjusted mean written score = 38.35 (sd=8.71)
  • Mean speaking score = 42.62 (sd=8.82).

Even having adjusted the scores upward, speaking scores were still more than 4 points higher, yet students were similarly happy with both. What I wonder is whether the adjustment and some fairly generous grading in places (see last post) caused students to receive better scores than they expected. Sadly the data was collected anonymously; otherwise it would be interesting to see how these responses plotted against exam scores. Maybe students just have lower expectations – whatever the explanation, this is a very strange phenomenon which might merit further investigation.
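(To make the lumping-together worry concrete, here’s a toy illustration – all numbers invented – of how a pooled mean drifts towards whichever class happens to have more respondents.)

    # Toy illustration with invented numbers: the pooled mean over-weights
    # the class that contributed more survey responses.
    class_scores = {
        "class_A": [30, 32, 34],          # harder exam, fewer respondents
        "class_B": [40, 42, 44, 46, 48],  # easier exam, more respondents
    }
    pooled = [s for scores in class_scores.values() for s in scores]
    print(f"pooled mean: {sum(pooled) / len(pooled):.2f}")  # 39.50, nearer class_B's 44
    for name, scores in class_scores.items():
        print(f"{name} mean: {sum(scores) / len(scores):.2f}")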

The second measure in the feeling category was about the effect on student confidence:

  • The WRITTEN exam made me feel more confident about my English (mean=3.38, sd=0.97)
  • The SPEAKING exam made me feel more confident about my English (mean=3.92, sd=0.89)

These results were much more predictable, though it’s perhaps a little odd that on the written exam, where just one student got an A, many students still felt that it improved their confidence. Still, responses for the written exam weren’t entirely positive, and rightly so. One thing that springs to mind is that I didn’t actually issue letter grades in this exam. Perhaps I should have done, in order to give students a better idea of what I thought about their performance.

Reasons and effects

The second thing that I wanted to know was why students received their scores and how students might respond to their results. I devised three questions for each exam which tried to get at the amount of preparation they had done for the exam and the amount of effort they expended generally. With the responses included, these were:

  • I studied more than one hour for the WRITTEN exam (mean=3.33, sd=1.31)
  • I used Anki (or another similar app) often this semester (mean=2.56, sd=1.08)
  • I take careful notes of new language from the board (mean 3.69, sd=1.01)
  • I practiced more than one hour for the SPEAKING exam (mean=3.33, sd=1.16)
  • I have done English Cafe 3 times or more this semester (mean=3.56, sd=1.48)
  • I try to practise English speaking outside of class or English Cafe (mean=3.35, sd=1.05)

The first three items generally relate to the written exam and recommended behaviours. The second three relate to the speaking exam and out of class practice (English Cafe is the optional conversation slots that students can sign up for with teachers). The first set suggests that students didn’t prepare a great deal for their written exam, either in the period immediately before it or during the half-semester using the spaced repetition system that I recommended. The response to the third question rings true to what I see in class, namely that notes are diligently taken, and the scores further my suspicion that these notes are then promptly forgotten as soon as students get out of the door. From these results, I clearly need to think more about how to get students to maintain the language that we encounter in class, but that’s for another post.

For the speaking exam, again preparation is reasonably low, though on these numbers more than half of the students spent over an hour on it, and a surprising number claim that they try to practise English outside of class. I’d be interested to know what form this practice takes.

To survey the effects of the exam, I asked some similar questions:

  • Because of my WRITTEN exam score, I will try to use Anki (or another similar app) (mean=3.28, sd=1.04)
  • Because of my WRITTEN exam score, I will try to take better notes in class (mean=4.10, sd=0.70)
  • Because of my SPEAKING exam score, I will try to practise speaking more outside class (mean=4.05, sd=0.85)

These results for the written exam are rather interesting, in that the behaviour students already consider they do well (note-taking) is also the one they say most needs improving (though I suppose I can’t rule out the chance that it was the students who responded negatively to the previous question about note taking who are responding positively to this one). What I am potentially getting into here is the difficulty of changing ingrained practices – a lack of genuine engagement with language and perhaps an over-reliance on cramming rather than long-term learning. This was what I’d hoped to combat by using the app, as well as allowing myself to do a lot more work with lexis. Here, however, the student choice seems to be for the path of least resistance. While there is a chance that the app I recommended simply doesn’t fit well with the students, I see this as indicative of an underlying culture of shallow and temporary learning that I would like to do my best to change.

Was it a fair test?

Perhaps the main motivation for writing my last post was the fear that as an examiner I was letting my students down, and causing their scores to be lower than they actually deserved. I’d hate for the trust I have built up with these groups to be damaged by a poorly written exam. The following questions were, therefore, an attempt to see how students evaluated the exam and their own performance.

  • I thought that the WRITTEN exam was a fair test of class content (mean=4.27, sd=0.86)
  • If I had studied harder, I could have got a higher score on the WRITTEN exam. (mean=4.46, sd=0.75)
  • I thought that the SPEAKING exam was a fair test of class content (mean=4.42, sd=0.82)

These are a pleasing set of results for my peace of mind. They include some of the highest mean scores, so at least in the students’ minds (much more so than mine) I am a fair/competent examiner. The second question also shows that they tend to attribute their low scores to their own effort rather than deficiencies in the exam. This might be reflective of a less critical view of exams, however. For each exam, only two students disagreed that it was a fair test. Still, the largely positive response suggests that I haven’t irreparably damaged my relationship with the group. This in no way excuses me from making improvements though.

Improvements for future exams

Finally, I wanted to know how students thought that I could improve the exam. I also wanted their view on my idea that the exam could feature slightly extended writing pieces in order to get away from the kind of half-open questions that plagued this exam.

  • I would prefer more extended writing/communication in the WRITTEN exam, and fewer vocabulary and grammar questions. (mean=4.26, sd=1.06)

While there’s a bit of variation in answers here, students seem to be more positive than negative about this. I’m undergoing a bit of a shift in thinking about writing at the moment anyway, and trying to include a few more writing assignments in class, so my next exam could/should include a writing section.

Finally I included an open field for students to suggest improvements to the written and spoken exams. Suggestions included less grammar (funny as there really wasn’t much – my students perhaps view grammar differently to how I do), and there were comments that the listening section was too heavily weighted (which I might agree with) and that the questions started very suddenly (an easy fix). One student picked up on the fact that the written questions were too open, and another claimed that he couldn’t see the pictures well.

Speaking-wise there wasn’t much of interest except for a request to see the time, which I will definitely try to organize for the next exam.

Reflection on Reflection

All in all I’m reasonably happy with the way that this went. I learned a lot from it, and I hope it also gave the students a sense of agency in deciding how they are examined. I also hope that doing the survey helped students to reflect on their own behaviour, attribute their successes and failures to the right reasons and hopefully do something differently next time. As for what I might do differently myself, the one change that springs to mind is to try to collect feedback with names – it would be very interesting to see how responses correlated with actual exam scores, and also to do this for individual classes rather than for all of my students as a group.
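(If I had collected names, the correlation itself would be a one-liner. A sketch with invented paired data, using statistics.correlation from Python 3.10+:)

    # Sketch only: invented paired data showing how survey answers could be
    # set against exam scores if the responses weren't anonymous.
    from statistics import correlation  # Python 3.10+

    happiness = [4, 3, 5, 2, 4]         # hypothetical Likert answers (1-5)
    exam_scores = [41, 33, 45, 20, 38]  # hypothetical marks out of 50
    print(f"Pearson's r = {correlation(happiness, exam_scores):.2f}")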

Final Word

Thanks very much for reading if you got this far. If you’d like to try this yourself, please feel free to use the Google Form linked above for your own investigations, and if there’s anything you’d like to chat about please do leave a comment below. If you do try something like this, I’d be very keen to know how it turned out.

Cheers,

Alex

What I’m going to think about the next time I write an exam

Usually proctoring (or invigilating in UK English) written exams at my university is a somewhat trying experience. Trying because I sit at the front of the classroom for over an hour in silence punctuated only by the frustrated sighing of my students. Looking out I see a sea of furrowed brows, scratched heads and, occasionally, expressions of total mental capitulation. The reasons for this are twofold. Firstly, students have quite often prioritized other studies (possibly including studying the effects of drinking and computer games on exam scores) over English and therefore aren’t especially well prepared for the exam. It’s important for me to recognize this as an examiner and to accept that I can’t write an exam that pleases everyone, especially those who don’t bother to prepare. However, the second reason for the atmosphere of general malaise in the exam room is that I am still far from a good writer of exams, and this is something that I would like to improve. This post will be a slightly self-indulgent one (aren’t they all?) in which I have a look at what I did and what I can do better. I’m going to come back to this each time I write an exam to remind myself, and I’m putting it out there in case there’s anything to be learned from it for others.

Let’s start with the specifics. The worst question that I wrote on this exam (about a very common mistake) went like this:

Correct (수정) the underlined word in the sentence (1 point) and write it again on the line below using different language, but keeping the same meaning. (1 point)
3. I’m going on a date. I bought new shoes and jeans to look _gentle_.

This is fine as far as the first ‘(1 point)’, but then gets very confusing. So much so, in fact, that when grading the exam I misunderstood my own instructions and only marked the first part of the question and not the second. I was confused by students offering different versions of both the word and the sentence. There are two main problems here. The first is that the pronoun ‘it’ in the instructions could refer to either the word or the sentence, and here it’s more likely read as referring to the word. Largely this is just crap writing on my part, but it does also point to a wider issue: pronouns are an area of confusion for low level students and something that I should perhaps try to avoid in future.

The second problem here is that the instruction is not particularly clear anyway, especially if you’re reading the sentence on this blog. What I intended in writing the question was to challenge students to use a couple of other ways of expressing cause (“because I wanted to / so I would”), but without some form of guidance it relies on students remembering the classroom context, and essentially turns the exam into a game of ‘guess what the teacher wants us to say’, which I would sincerely like my exams, and class in general, not to be. Next time I need to remember that it’s dangerous to rely too much on classroom context, and that anyone sitting down to take my exams should be able to supply the answers from a good knowledge of English.

While reflecting on this exam I wondered whether an example would have helped, but there was only one question of this type on the exam. It’s also very difficult to exemplify something like this without giving the answer away. However, I could have easily supplied a hint in the form of “because” and/or “so” as a prompt.

This is a general pattern in my exam writing. My question prompts tend to be too open, and this probably confuses students and also makes grading more difficult. Take these two examples:

Think of a movie that you saw recently. Write a sentence about parts of the movie. You must use some of the language that we used in class in each sentence.

Respond to these questions and give some helpful extra information.

Again, these are really hard to interpret without classroom context. What’s worse, in the first case, is that it doesn’t even call for successful or interesting use of the loosely defined “language we used in class”, but simply that it be used. This leads to answers like “his facial emotion is emotional”, which I feel, going by the instructions, deserves at least partial credit, as we talked about “emotional” as a way to describe acting. The second instruction is a little bit better, but still requires much more clarity. What I wanted students to do was answer a yes/no question and supply a little bit more information in order to help the conversation to progress. Again this led to some strange answers that were difficult to grade. I also mixed questions that followed on from each other with others that didn’t, without really specifying which was which, and based follow-up questions on expected answers to previous questions – answers which students didn’t give in some cases, making it impossible to answer the next question. On reflection the whole thing would have been much better set as a discourse completion task – something which would suit the conversation based nature of the class much better anyway.

These problems are symptomatic of a tension between language work and communication work that I often feel both in class and when writing exams. Largely my class is a conversation based one, with the emphasis on just saying something rather than saying something ‘correctly’. Prompts like the two under discussion here are an attempt to mirror that in an exam, but then they have to be graded as such, and it’s difficult to know where to draw the line in terms of understanding or interest. Something which might go over fine between two students in conversation can look pretty senseless written down.

Basically, these prompts are me getting caught between assessing communication and assessing language (though I’d accept that there may not be a clear space between them in which to get caught). I either have to go one way or the other: into a more open writing prompt with a rubric, or to more language based assessment; I can see plenty of good reasons not to do either. Asking my students to write at length in an exam seems unfair if we don’t do any writing in class*. On the other hand, a totally language knowledge based exam doesn’t seem to be in the spirit of the class, might require spending more time in class looking at language, and would probably be even more difficult than this exam was, as a lot of the marks that the students did get came from open prompts.

The last thing I want to talk about here is difficulty and grading. As I mentioned above, this exam was difficult for the students, as the two histograms below hint at.

Exam Result Histograms

On the diagram above, 5 refers to students scoring between 0 and 5 out of a maximum of 50. Bearing in mind that I work on around 90% being an A, and somewhere in the low 80%’s being a B, this exam left nobody getting an A and only 4 of 40 students getting a B. Honestly this is probably a-whole-nother blog post in itself, but clearly something is wrong here. Either the students are not learning what I think they are, or I am not giving them enough time in class to learn the stuff that I think is important, or they’re not learning full stop. When setting exams I’m definitely drawn to testing new learning, and I hate setting questions about things that students should already know, but maybe that’s necessary to move the distribution up a bit. However, I need to consider the kind of effect that it might have on students – will these marks give them a bit of a kick up the arse, or will they shatter the confidence that I had done pretty well at building up over the semester? Perhaps it might be a good time to collect some feedback?
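(As an aside, the binning and cutoffs are simple enough to script. A toy version follows: the scores are invented, 90% for an A is as stated above, and 82% for a B is my assumed stand-in for “somewhere in the low 80%’s”.)

    # Toy sketch: bin invented scores into the 5-point bands used in the
    # histograms, then apply assumed grade cutoffs (90% A, 82% B).
    from collections import Counter

    MAX_SCORE = 50
    scores = [12, 18, 23, 27, 31, 34, 38, 41, 43]  # hypothetical marks

    # Band "5" covers scores 1-5, "10" covers 6-10, and so on.
    bands = Counter(((s - 1) // 5 + 1) * 5 for s in scores)
    for upper in sorted(bands):
        print(f"{upper:>2}: {'#' * bands[upper]}")

    def grade(score):
        pct = 100 * score / MAX_SCORE
        return "A" if pct >= 90 else "B" if pct >= 82 else "C or below"

    print([grade(s) for s in scores])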

I think I’ve got almost as far as I’m going to get with this post, but I’d welcome any thoughts anyone has on this, as I feel like I’ve made a little progress here, but there’s still some way to go. As a final bonus, here are some other things that I need to think about next time:

  • Using the British “maths” leads to all sorts of subject-verb agreement horrors.
  • Be careful when using “repeat” if I really mean “rephrase”.
  • How important is spelling? Is “claims” an acceptable attempt at “clams” if I tend to de-emphasise the importance of spelling? How about “cramps”?
  • How can I make listening questions more difficult? Could I think about speaking faster or using a different accent?

 Cheers,

Alex

* Although if the rubric assessed students in a similar way to our classwork (e.g. content, understandability, interest) I guess it wouldn’t be so bad.

Class constructs: creating my own (part 1)

I blogged previously about the possibility of creating a construct for a short term class in order to keep teaching and testing in line with one another. There is also the advantage that your construct can be shared with students as a form of class goal, and activities can be justified to students in terms of it (especially if they are of the less fun variety). As a brief recap, a construct is a short statement of what you will teach and test, how you will go about it and the expected results and standards. In this post, I will document the first part of the process of creating my own construct.

At the end of the last post I looked at 4 areas that need to be considered in creating a construct. These were:

  • Assessment (& teaching) context (students, institution, geographical location, purpose and score use, and tester).
  • Assessment (& teaching) procedures (What students are expected to do in class and exams)
  • Construct definition (What do you mean by the terms used to describe your class – what is “English”, “Conversation” or “Speaking” for this class?)
  • Models and Frameworks (How can you justify the above with reference to clever people or yourself?)

In this post I will try to outline my thoughts on the first two areas.

Assessment and Teaching Context

A good place to start here is asking who my students are. In my case this also covers a lot of the geographical and institutional factors. Beautiful and unique snowflakes that they of course are, my lot do form quite a usefully homogenous group in two ways. Firstly, they are all Korean and are products of the educational culture here, and secondly they are all students at a polytechnic university. This allows me to make some guiding assumptions:

  • Their English education will have been largely reading and listening focused, and grammar and vocabulary will often have been decontextualized and almost always depersonalized. If they have encountered speaking they have not been especially successful in learning it. I’d venture to say that they have generally learned English as an academic subject rather than a language.
  • They are not taking English as a major, and so they are unlikely to be learning it out of a love for the subject (though this is possible). They are more likely to be learning it out of long-term pragmatic value, but in the short-term their grade is the most important factor. Their future careers are more likely to require practical, rather than perfect, English.

In terms of assessment purpose and score use, one or two things are worth considering. Firstly, I’m aiming to assess achievement not proficiency. In other words, someone who makes a great effort and improves from 0 to intermediate should theoretically score higher than an initially high-intermediate speaker who improves little. Secondly, assessment is not only in terms of exams, but performed continuously over the term through participation, quizzes, projects and 1:1 conversation. The scores have a very narrow use, which is assigning grades for the term. However these grades may dictate scholarships, so it is important that they accurately reflect effort and achievement.

One final consideration is who the assessor is. For the most part it is me, but I do feel that student views should play a part in assessment as well, especially in something as subjective as participation. I think allowing students to play a part in scoring themselves and others also helps to motivate them, as well as keeping complaints down at final grading time.

Assessment and Teaching Procedures

In assessing and teaching the course I want to take the notion of “conversation” as literally as possible. By this I mean that the aim of the course will be to develop the ability to hold medium length conversations in English on a few topics, and we will learn to do this by having short conversations throughout the course, which will serve as a framework for practicing useful lexis, conversational skills and strategies, and a little bit of grammar.

Given this aim it makes sense for the mid-term and final speaking exams to take the form of conversations. This will form the principal drive for the course, and students will be expected to apply what they have learned during class in the exams. The length of the exam is important, as it should be sufficient to pose a real challenge to students (or at least appear to).

Also significant is the number of participants. This is a really interesting question that I am still puzzling out. My preference in the past has been for 4 person speaking assessments. I believe that they pose a greater degree of challenge in terms of organizing turns and dealing with multiple inputs. They’re also practically much easier to arrange, and, going back to length, I think that a 25 minute 4 person exam sounds more difficult than a 12.5 minute pair conversation, even though the speaking time per student works out the same. The potential downside to this is that a lot of my classwork is done in pairs, though there is nothing to say that I couldn’t up the group size over the course of a semester.

Another thing to figure out is the role of written exams. It is institutionally mandated that 50% of my mid-term and final exams be a written paper. What, then, is the role of writing in conversation? Listening might provide some of those marks, perhaps choosing the right answer to a question. The discrimination of similar sounds could also be included. I also think that common errors that we point out in class should have a role. Finally, vocabulary and lexis in the form of gap fills will be important, as well as subtler shades of meaning that we talked about in class that simply won’t come up in a speaking exam. As far as possible, I would like to avoid grammar transformation exercises and reading passages.

All of this and I’m only really through talking about final assessments. Ongoing assessments (quizzes and participation scores) should also be generally conversation based, and reflect the effort made to actually have conversations, on the basis that conversational skills cover a wide range of areas and are probably subject to individual variation. It’s developing an individual ability to have conversations that I am most interested in during this course. Partly this can be taught directly in terms of strategies and language, but partly this is something that you figure out for yourself by getting involved. The course needs to both offer opportunities to do this and reward them when they are taken.

To bring this post to a conclusion, as I am already over my self-imposed 1,000 word guideline: my teaching and assessment aims should be to improve speaking, as this is the area in which my students need the most improvement. A conversation based approach gives an opportunity for personalizing the language as well as providing a reasonably well defined structure for assessment (see the next post). Conversation must form the basis for ongoing and final assessment of achievement on the course, with an emphasis on fluency and communication skills rather than accuracy (or complexity especially). The ability to deal with small group work is thought to be important, as is the ability to function in English speaking environments for a slightly longer duration.

In the next post I’m going to tackle my description of conversation. I hope you’ll be there to read it. In the meantime if you have comments, questions or suggestions, please leave them below the line.

Cheers,

Alex

Class constructs: an introduction.

16 weeks, roughly 5 hours of class time in each. Throw in a couple of presentations and a magazine making project, as well as exams, entrance tests and university festivals, and it doesn’t leave a lot of time for learning something as large as a language. Nevertheless, we grab our textbooks and have a go – and while we do so we also try to order ourselves for the dishing out of grades or levels. These are basically the two problems that I imagine many teachers with some autonomy grapple with: what to teach, and how to assess it. In this post I’m going to set the background for creating class constructs that go some way towards tackling this problem.

Construct is a term drawn from the assessment literature, and is more or less a statement of what the test author believes they are testing, how they should test it, and what the results might look like. As an example, a construct for the TOEFL exam would be a definition of the English ability required to take a higher education course, perhaps in terms of vocabulary size, grammatical knowledge, skills (summarizing, note taking), functions, knowledge of genres and many other things. It would also include the kind of tasks that the authors felt would test these, and what acceptable and unacceptable performances look like. All of this is realized in the test that is actually taken, and in the rating scales, scoring and the final grade. Therefore, if you score a full 120 on the TOEFL iBT, you can congratulate yourself on being the embodiment of what ETS (the makers of TOEFL) think academic English is.

“Teaching to the test” gets a bit of a bad rep, especially in Korea where anything that isn’t an academic reading passage is ruthlessly cast aside. It feels a bit dirty to be honest, like you’re being cowed by the man – encouraging your students to chase letter and number grades over actually learning anything useful, or teaching test-taking strategies rather than language. If the test is crap (TOEIC, the Korean university entrance exam) then this is abundantly true, but if the test is good, then surely this can be a good thing (these two situations tend to be called negative and positive washback respectively). For a short course test such as mine, which is aimed at measuring learning, the design of the test should play a large part in deciding what is learned (though we know that this is not an exact science), and so a construct not only defines the construction of a test, but in this case the construction of the whole course.

But why exactly is this useful? Firstly, going back to the opening sentence, time is short, and English is not only big but constantly shifting. With hundreds of thousands of words, not to mention fixed phrases, as well as countless combinations of functions, domains of use, registers and skills, pinning English down to something teachable is a constant source of frustration and argument in journal articles, blogs and at conferences. General English courses (in the form of books) try to tread the most middling, inoffensive and general line, in order not to upset anyone into not buying them. However, this means they also tend to miss out anything culturally specific, potentially insulting or simply left-field. Having a construct allows you to cut out the irrelevant stuff and focus on what your students (and you!) really want and need. In my case, students can translate about 3000 single words in English, and have a pretty decent reading level. Their grammar is OK if they can write it out first, but spoken interaction is often conducted in single words at the beginning of the course. They also have very little knowledge outside the academic register. I’ve talked a lot about this already (and will again), but safe to say that concentrating almost exclusively on speaking skills is a good bet.

The second advantage that I can see for developing a construct for the class is that if you want the exam to dictate teaching, you theoretically should write the exam first. The problem of course, is that it’s difficult to write an exam based on content that you haven’t taught yet, especially if your course is based a lot on lexis that arises from what students say, rather than being planned in advance. A construct for the class provides a nice straight ledge for aligning one’s ducks on, and if teaching and testing are conducted with reference to it then the two should reflect and reinforce each other. This hopefully will help me to tackle two problems that I’ve encountered in previous semesters – difficulty in writing exams that accurately reflect what we have done in class, and also the fact that in feedback I tend to score low on questions about students understanding my goals. As an extra idea, there would of course be nothing to stop you designing a construct in collaboration with your students.

So what goes into designing a construct? I’m going to finish this post by examining in a little more detail the kind of thinking that one might need to do, and presenting the questions that might need to be answered. In doing this I’m drawing heavily on the work of Sari Luoma (2004) on speaking assessment, though these considerations could easily be adapted to other assessment concepts.

Assessment Context

A construct links the theoretical with the more concrete (though of course this is still within the context of a test, which is itself often a prediction of how a testee would fare in the real world). Part of this is defining the context of the test: institution, purpose, takers and their backgrounds, the tester, and the plans for score use. While the theoretical definition of speaking for young children, teenagers and young adults might be similar, the ways of eliciting speech (task type, topic) will be very different, so context here is extremely important.

Assessment Procedures

A construct should have some indication of the length and frequency of the assessment, as well as the tasks required to elicit it and the methods used to score it. This helps keep things practical (no sense in having hour long one on one speaking tests when you teach 200 students) as well as, in the case of my class constructs, meaning that class activities can mirror testing activities.

Construct Definition

What are you actually going to try to teach and test here? The more specific you can be here the better, so you might want to think about sub-skills, grammatical structures and vocabulary ranges, rather than something general like speaking. You should also consider what a good, average and bad performance might look like in these terms. All of this will help greatly in designing rating scales and rating performances.

Models and Frameworks

What’s even better is if you can relate the thinking above to reading that you’ve done in the area. An example of this might be Hymes’s SPEAKING framework. This gives you a base to work from in terms of teaching and learning.

A Construct Definition

Finally, you should attempt to summarize all of the thinking above into a neat little paragraph like the one below:

The aim of this test (class) is to assess (teach/improve) the examinees’ ability to express their ideas in English, take their interlocutor’s contributions into account and make use of them in the discussion, and collaborate in the creation of interaction. Social appropriateness is not assessed (taught) explicitly. (Luoma 2004: 121).

So that is roughly what a construct design process looks like. In the next post or two I’m going to have a go at it myself. In the meantime I’d be interested to know your views on whether this is a sensible approach. Are there any downsides to working this way? Am I consigning my students to a life of exam hell? Any argument very much welcomed below the line.

Cheers,

Alex

Reference

Luoma, S. (2004). Assessing Speaking. Cambridge: Cambridge University Press.