Tag Archives: Assessment

10 presentation tips for students in the form of a letter

Dear students,

I am writing this letter after watching last semester's students do their presentations. Overall, I was quite disappointed with their presentations. You are reading this because I don't want you to make the same mistakes. Paying attention to this letter should lead to a higher grade for you, so please take a minute to read it.

Your predecessors (last semester’s students) made one big mistake. They did not read the scoring system, or the presentation rules. The scoring system and the rules help me to give you a grade, but they also help you to do a good presentation. However, many students ignored the rules and the system, did poor presentations, and so got low scores.

As a teacher, I feel responsible for this. Maybe I didn’t explain clearly why the scoring system is like that, so I will do it here. My beliefs about presentations are:

  • YOU are the most important part of your presentation. We want to know what YOU know; what YOU feel; what YOU think. The best presentations last semester were about things that people were passionate about, or were very personal. We also want to hear YOUR ENGLISH.
  • THE AUDIENCE is very important too. They want to learn something from you, and be entertained or interested by you. Also, they want you to communicate with them.
  • Your presentation needs presentation skills that you can use again and again at university and in your career. Almost everyone will have to present something at some time. These skills are very important, and very different from normal speaking. If you don’t learn these skills, you will find this presentation difficult, and many other things difficult.

Based on these thoughts, here are some practical tips for you:

  1. Choose a topic that is personal to you. It can be a personal story, an interest or a theory. Also, think about whether the audience will be interested. Don't just look up something on the internet that you don't know and don't really care about.
  2. Structure your presentation carefully. Think about an introduction, a conclusion and two or three key points. If you try to do more than this, your presentation will not have enough detail.
  3. When you design your slides, the information on them should add to what you are saying. Instead of writing your three key points on a slide, find pictures to represent them. If you have difficult words or numbers, you should write these on your slides to help the audience understand.
  4. DON'T WRITE A SPEECH! Presenting is not the same as reading. Speaking and writing are quite different. Also, memorizing your speech is very difficult. If you write a five-minute speech and try to memorize it, it will take you at least two hours. In that time, you could just practice explaining twenty times! If you do this, your presentation could be twenty times better!
  5. Ideally, you should not look at your notes during your presentation. They are there to help you if you forget. Your notes should be key points, words and one or two sentences only. You should never read more than one sentence from them.
  6. Your English does NOT have to be perfect. Your English does NOT have to be very complicated. Your English HAS to be understandable. This means that you should not look up too many words in a dictionary, or copy writing from the internet. It also means that you should check your pronunciation of difficult words carefully (especially if they are in the title). It also means that you should speak slowly and simply, and check that the audience is understanding.
  7. There should be NO KOREAN in your presentation. The challenge here is to make yourself understood in English, with help from pictures and gestures. You should imagine that your audience is from Thailand, and cannot speak Korean or read Hangeul.
  8. Keep to the time limit. You should practice your presentation beforehand and check that it lasts five minutes. During the presentation, don't be afraid to cut things so that you finish in time. Have something extra planned in case you finish early too.
  9. Presenting is about communicating with your audience. Look at them, smile at them, talk to them, check that they understand. Ask them questions. Tell them a joke. Surprise or shock them. There are many ways to keep them interested. Keep them in your mind at all times during planning and presenting.
  10. Lastly, and most importantly, PRACTICE. Presenting is about standing up, speaking loudly and slowly, changing slides, and talking to people. So, you should practice like this. Imagine you are really presenting. Go home and present to your parents, grandparents or your younger brother. Presenting always feels strange the first time, and then less strange each time after. It's better to feel strange in front of them than your teacher, your friends and the girl/boy you are secretly in love with.

Finally, let me share some of last semester's best presentations. Notice that most of them are very personal.

  • The rules of basketball
  • Working in an Izakaya
  • Dates I would like to go on
  • UFO sightings
  • Unknown webtoons
  • My first love story
  • The end of Inception
  • Three restaurant special events
  • Three ways to measure your height

Thank you for reading, and best of luck with your presentations.

Alex

 

TBV’s Notes

As you can see from the letter, I wrote this as a way to turn what was a reasonably negative and frustrating experience into what will, I hope, be a more positive one next time. This is also a way to spread information to students in one useful lump, rather than feeding it in piecemeal as I did this time. In general, this project was very rushed, and I think that next time this letter will help me to think about what is important, and about the things that I need to do in order to structure the project better and give students the best chance of success. What I would like to do next time is do the practice in class if possible, and get students to develop their presentation from a fairly casual explanation to a friend, into something more formal in small groups, and finally into an actual presentation.

Looking back over the tips, the “NO KOREAN” sticks out. I feel like I should (defensively) mention that in general I am fairly pro-L1 in class in the right context, but I also think that students tend to use it as a crutch when things get difficult in English.

I am undecided whether to actually give this letter to students next semester, but I’m leaning towards it. It is, at least, a useful reminder for me of what to concentrate on next time. Feel free to share it with your students, and do let me know if there’s anything that you’d change or add.

Cheers,

Alex

PS I feel like I have lifted this posting style quite shamelessly from Mr. Michael Griffin. You can check out his blog here.

 


Reflecting on my speaking exams

This is the third and final part of my short series reflecting on my mid-term exams. Timely too, seeing as I'm writing the final exams this week. If you're interested, you can read about my reflections on my written exams here, and the feedback I collected on my exams here. In general I'm much happier with my skills as a speaking examiner, but in the student feedback that I collected there was still room for improvement. This post takes a bit of an experiential direction, beginning by looking at what I do, then what I think/thought about it, and finishing by trying to make some changes for this round of exams.

What did I do last time?

I’m going to deal with this in two parts, my method and my scoring system. In fact, I’m mostly concentrating on the scoring system, as it seems the most appropriate tool for helping generate the performances that I would like from my students. Nevertheless, I’ll start with my method.

My mid-term speaking exams were 20 minute conversations between groups of four people. These groups were randomly selected a few days before the exam. Students could choose any or all of the four topics that we had studied in the half semester, and could prepare what they wanted to say, although they were discouraged from memorizing long passages of text.    

You can read my full scoring system on the second page of the document below. The language used is necessarily simplistic in order that students can understand it, but this is perhaps a problem when it comes to judging fine-grained differences in performance.

Midterm – Speaking Exam (Level 2.2)

My scoring system rates students on five traits, each scored from one to five (yes, students get a full 20% just for showing up). A score of three represents a pass in each trait. Half marks are possible. The five traits are as follows:

  • Difficulty & Interest
  • Participation
  • Fluency
  • Understanding
  • Effort

Difficulty and interest requires the student to use more complex language and to talk about interesting things within the topic. Participation asks the student to play a full role in the conversation. Fluency requires them to speak at a comfortable speed, with no big hesitations. Understanding means being understood, by the teacher but more importantly by their peers too. Effort is my attempt to motivate students of both higher and lower ability coming into the course, by challenging them to exceed my expectations.
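Out of interest, the arithmetic behind this is trivial. Here's a minimal sketch in Python of how the five trait scores might combine, assuming (my assumption for illustration, not something stated in the rubric) that they are simply summed and scaled to a percentage:

    # Minimal sketch of combining the five trait scores (1-5, half marks
    # allowed) into a percentage. Note the minimum possible result is
    # 5/25 = 20% "just for showing up".
    TRAITS = ["difficulty_interest", "participation", "fluency",
              "understanding", "effort"]

    def speaking_score(scores):
        for trait in TRAITS:
            value = scores[trait]
            if not (1.0 <= value <= 5.0) or (value * 2) % 1 != 0:
                raise ValueError(f"{trait}: scores run 1-5 in half-mark steps")
        return sum(scores[t] for t in TRAITS) / (5 * len(TRAITS)) * 100

    print(speaking_score({t: 1.0 for t in TRAITS}))  # 20.0 (showing up)
    print(speaking_score({t: 3.0 for t in TRAITS}))  # 60.0 (a pass in every trait)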

What do/did I think about it?

The slash in the title above refers to the fact that I scribbled down a few reflections during my speaking exams last time. Other insights are coming from thinking about the exams as I write this. My method, I think, is fairly suitable. It gives students enough freedom to express themselves and is in keeping with the fairly fluency-based nature of the class. Also, a four-way conversation is a more challenging proposition than one between two people, and students have to work a little harder to stay involved and follow what is going on. There is also the efficiency saving of only explaining to five groups per class, rather than ten if I did it in pairs. As time is limited, this is a very practical reason to test in larger groups.

All this means that there is fairly little that I want to change. The only thing that I wonder about is changing the number of topics, and their specificity. Four topics between four people in a twenty-minute exam leaves about one minute 15 seconds per person, per topic. In general, testing in class aimed toward being able to speak for two and a half minutes per person, per topic. Thus one change that I would like to make is to limit groups to two topics, and also to make the topics more specific. Last time I had very loose topic prompts (e.g. favourite foods, shopping style and stories). This time I'd like to tighten them up a little bit, for example: "My ambition and what I have done to achieve it" for the personal background module. I'd also like to increase the spontaneity a bit by selecting a topic randomly. This might require a change to the scoring system as well.

Thinking about scoring systems, it's clear to me that mine needs a bit of work. The main thing is that it perhaps doesn't reflect clearly exactly what kind of performance I was looking for. This is due to the lack of a clearly defined construct, a project which I never quite got around to finishing properly. Nevertheless, I have tried to briefly outline a construct below. These are the kinds of things that students should perhaps be able to do in their exams. This is based largely on grading notes from my last exam.

Students should be able to talk in a reasonable amount of detail about 2 topics as part of a twenty minute conversation, making the conversation interesting through a variety of opinions (backed up if possible), personal stories and unusual information/facts. Students should be able to organize the conversation into short turns rather than long monologues, and be able to both claim and relinquish the floor when appropriate. The conversation should be relatively spontaneous. All speech should be understandable (to both peers and teacher) and fluent (defined as a steady rate of speech with minimal hesitation and restarts). Accuracy in grammar, word choice, syntax and pronunciation is not important unless it hinders understanding, but errors that were explicitly discussed in class should be avoided. Some attempt to (correctly) use language from class is preferable, but long memorized passages are not. No Korean language, aside from names, is permitted.

Looking at the scoring system linked above, I can see several places in which it does not match the construct and needs to be changed. The first is the slightly odd category of difficulty and interest. This is a bit counter-intuitive because, as anyone who's ever attended one of my conference presentations will tell you, it's perfectly possible to say something very boring using difficult language, and of course the other way around. Clearly the two need to be separated. Looking back at my notes, my way of judging difficulty seems to be to note instances of target language use. Therefore, it makes sense to split this out into its own category (more on this later). This leaves us with the rather subjective category of "interest". Again, I went back to my notes on this one, and found that the performances I scored highly tended to contain interesting stories, unusual information and strong opinions. This goes some way to making things less subjective, but much more importantly, gives the students a guide to how they can score top marks.

Another category that requires a little tweaking is participation. I'd like to include turn length, questions, turn management and amount said in a slightly updated rubric. The idea behind this is to make the conversations a bit more spontaneous and conversation-like, and avoid a problem I encountered occasionally last time of students essentially going round the table delivering monologues.

The fluency and understanding categories are largely fine as they are, though I want to add restarts into the fluency section. That just leaves me with the final section, effort. I like this section, as it gives the lower level speakers in my class something to aim for. I don’t like grading on ability only, as despite level testing it can vary quite widely in my classes. Again though, I’d like to be able to give students a little more guidance on how they might do it. This is a place where attempts to use target language can be recognized, along with not memorizing long pieces of language and speaking spontaneously. I could also try to recognize humour here. Finally, some recognition of shy students participating confidently would be good, as this is something that I have tried to encourage throughout the semester.

This just leaves a further section for penalty points. Given that this is an English exam, speaking Korean (except for names) is not allowed, and this must be made clear. Long diversions from the topic should also be penalized, as I am trying to get students to show what they learned in class. Finally, I think I want to punish errors that we have talked about in class, as these too are evidence of (not) learning.

What am I going to do about it?

When I make my exam guide on Wednesday, I’m going to do the following things:

  • Allow students to choose one topic in advance for the exam, and give them one of the other three in the exam.
  • Make the topics much more specific and relevant to class content.
  • Make the first category interest, defined as opinions, stories and interesting facts.
  • Add questions, turn management and turn length into the participation section.
  • Add restarts into the fluency section.
  • Write some notes in the “Effort” section, explaining to students how they can get better scores through spontaneous speech, humour and confidence.
  • Explain clearly the penalty points system.

These exam reflections have been pretty long, so thanks for reading this far. I'm interested in any ways in which you think I could further improve this system, and also in how you do your own speaking exams. If you want to read more you might even want to check out @alexswalsh's post on his speaking exams.

Cheers,

Alex

Collecting feedback on my exams

In my last post I lamented my skills as an examiner in the written format, and suggested that I might need to gather some feedback on my exams. So I did! In this post I’m going to outline how, and what the results were. Note to readers: this one gets a bit long and statsy. Go forewarned.

The method

My last post was really the start of the feedback process, and was actually a good start as it allowed me to figure out exactly what I wanted to know. In bullet point form:

  • How students felt about their exam score.
  • Possible reasons that they received that score.
  • Possible effects that their score may have.
  • If students felt the exam was a fair test.
  • How the exam could be improved.

From these basic ideas I generated a list of questions, and had one of our teaching assistants translate them into Korean. I chose to use the L1 to maximize return, in the belief that it would help students understand and complete the form better. I then created a Google Form (very simple!) using the questions and a Likert-like 1-5 scale from strongly disagree to strongly agree. I chose numerical responses in order to minimize translation on my part too, and because it would give me some stats to play with. I sent a link to the form to students via Kakao Talk. I got a fairly useful 40 responses out of 57, perhaps helped by a carrot of free drink prizes from the university cafe. This is what the results said.
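For the curious, the number-crunching behind each question below was nothing fancy. A minimal sketch of what I ran per item (the responses here are invented for illustration, and whether the SDs reported below are sample or population ones is my assumption):

    # Mean and SD for one Likert item (1-5). Invented responses for illustration.
    from statistics import mean, stdev

    responses = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]  # hypothetical 1-5 ratings
    print(f"mean = {mean(responses):.2f}, SD = {stdev(responses):.2f}")
    # mean = 3.50, SD = 1.08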

How did students feel about their exams?

  • I was happy with the result of my WRITTEN exam (mean = 3.56, SD = 1.16)
  • I was happy with the result of my SPEAKING exam (mean = 3.54, SD = 1.02)

Student responses indicated that they were similarly happy with their written exam score and their speaking exam score. This was not what I expected given what I thought was the relative difficulty of the exams, and my happiness with student performances. However, I then remembered that I'd adjusted the written scores upward in order to fit them to a curve, and wondered if this was the reason. I decided to look at mean exam scores for some extra insight. However, looking at the numbers, I came across a small problem. I'm dealing with three different classes' exams and survey responses, without knowing if the classes are represented equally within the responses. Given that their exams were of different difficulties, this is a potential source of dodgy calculations.

Nevertheless, there’s no choice but to lump all of these scores together to give us the following:

  • Raw mean written score = 35.00 (SD = 8.71)
  • Adjusted mean written score = 38.35 (SD = 8.71)
  • Mean speaking score = 42.62 (SD = 8.82)

Even having adjusted the scores upward, speaking scores were still more than 4 points higher, yet students were similarly happy with both. What I wonder is if the adjustment and some fairly generous grading in places (see last post) caused students to receive better scores than they expected. Sadly the data was collected anonymously; otherwise it would be interesting to see how these responses plotted against exam scores. Maybe students just have lower expectations – whatever the explanation, this is a very strange phenomenon which might merit further investigation.
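One small clue is hiding in the numbers above: the adjusted SD is identical to the raw SD, which suggests the "curve" was just a flat additive shift of 3.35 points. A sketch of that assumption, with invented scores:

    # A flat additive curve moves the mean but leaves the SD untouched.
    from statistics import mean, pstdev

    raw = [28, 31, 35, 39, 42]   # hypothetical written scores out of 50
    SHIFT = 38.35 - 35.00        # difference between the two means above

    adjusted = [s + SHIFT for s in raw]
    print(round(mean(raw), 2), round(pstdev(raw), 2))            # 35.0 5.1
    print(round(mean(adjusted), 2), round(pstdev(adjusted), 2))  # 38.35 5.1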

The second measure in the feeling category was about the effect on student confidence:

  • The WRITTEN exam made me feel more confident about my English (mean = 3.38, SD = 0.97)
  • The SPEAKING exam made me feel more confident about my English (mean = 3.92, SD = 0.89)

These results were much more predictable, though it's perhaps a little odd that on the written exam, where just one student got an A, many students still felt that it improved their confidence. Still, the written exam's effect on confidence wasn't entirely positive, and rightly so. One thing that springs to mind is that I didn't actually issue letter grades for this exam. Perhaps I should have done, in order to give students a better idea of what I thought about their performance.

Reasons and effects

The second thing that I wanted to know was why students received their scores and how they might respond to their results. I devised three questions for each exam which tried to get at the amount of preparation students had done for the exam and the amount of effort they expended generally. With the responses included, these were:

  • I studied more than one hour for the WRITTEN exam (mean = 3.33, SD = 1.31)
  • I used Anki (or another similar app) often this semester (mean = 2.56, SD = 1.08)
  • I take careful notes of new language from the board (mean = 3.69, SD = 1.01)
  • I practiced more than one hour for the SPEAKING exam (mean = 3.33, SD = 1.16)
  • I have done English Cafe 3 times or more this semester (mean = 3.56, SD = 1.48)
  • I try to practise English speaking outside of class or English Cafe (mean = 3.35, SD = 1.05)

The first three items generally relate to the written exam and recommended behaviours. The second three relate to the speaking exam and out-of-class practice (English Cafe is the optional conversation slots that students can sign up for with teachers). The first set suggests that students didn't prepare a great deal for their written exam, either in the period immediately before it or during the half-semester using the spaced repetition system that I recommended. The response to the third question rings true to what I see in class, namely that notes are diligently taken, and the scores further my suspicion that these notes are then promptly forgotten as soon as students get out of the door. From these results, I clearly need to think more about how to get students to retain the language that we encounter in class, but that's for another post.

For the speaking exam, again preparation was reasonably low, though going by these results more than half of the students spent more than an hour on it, and a surprising number claim that they try to practise English outside of class. I'd be interested to know what form this practice takes.

To survey the effects of the exam, I asked some similar questions:

  • Because of my WRITTEN exam score, I will try to use Anki (or another similar app) (mean = 3.28, SD = 1.04)
  • Because of my WRITTEN exam score, I will try to take better notes in class (mean = 4.10, SD = 0.70)
  • Because of my SPEAKING exam score, I will try to practise speaking more outside class (mean = 4.05, SD = 0.85)

These results for the written exam are rather interesting, in that the behaviour students already consider they do well (note-taking) is also the one they say most needs improving (though I suppose I can't rule out the chance that it was the students who responded negatively to the previous question about note-taking who are responding positively to this one). What I am potentially getting into here is the difficulty of changing ingrained practices – a lack of genuine engagement with language and perhaps an over-reliance on cramming rather than long-term learning. This was what I'd hoped to combat by using the app, as well as allowing myself to do a lot more work with lexis. Here, however, the student choice seems to be for the path of least resistance. While there is a chance that the app I recommended simply doesn't fit well with the students, I see this as indicative of an underlying culture of shallow and temporary learning that I would like to do my best to change.

Was it a fair test?

Perhaps the main motivation for writing my last post was the fear that as an examiner I was letting my students down, and causing their scores to be lower than they actually deserved. I'd hate for the trust I have built up with these groups to be damaged by a poorly written exam. The following questions were, therefore, an attempt to see how students evaluated the exam and their own performance.

  • I thought that the WRITTEN exam was a fair test of class content (mean = 4.27, SD = 0.86)
  • If I had studied harder, I could have got a higher score on the WRITTEN exam (mean = 4.46, SD = 0.75)
  • I thought that the SPEAKING exam was a fair test of class content (mean = 4.42, SD = 0.82)

This is a pleasing set of results for my peace of mind. They include some of the highest mean scores, so at least in the students' minds (much more so than mine) I am a fair/competent examiner. The second question also shows that they tend to attribute their low scores to their own effort rather than deficiencies in the exam. This might be reflective of a less critical view of exams, however. For each exam, only two students disagreed that it was a fair test. Still, the largely positive response suggests that I haven't irreparably damaged my relationship with the group. This in no way excuses me from making improvements though.

Improvements for future exams

Finally, I wanted to know how students thought that I could improve the exam. I also wanted their view on my idea that the exam could feature slightly extended writing pieces in order to get away from the kind of half-open questions that plagued this exam.

  • I would prefer more extended writing/communication in the WRITTEN exam, and fewer vocabulary and grammar questions (mean = 4.26, SD = 1.06)

While there’s a bit of variation in answers here, students seem to be more positive than negative about this. I’m undergoing a bit of a shift in thinking about writing at the moment anyway, and trying to include a few more writing assignments in class, so my next exam could/should include a writing section.

Finally I included an open field for students to suggest improvements to the written and spoken exams. Suggestions included less grammar (funny as there really wasn’t much – my students perhaps view grammar differently to how I do), and there were comments that the listening section was too heavily weighted (which I might agree with) and that the questions started very suddenly (an easy fix). One student picked up on the fact that the written questions were too open, and another claimed that he couldn’t see the pictures well.

Speaking-wise there wasn’t much of interest except for a request to see the time, which I will definitely try to organize for the next exam.

Reflection on Reflection

All in all I’m reasonably happy with the way that this went. I learned a lot from it, and I hope it also gave the students a sense of agency in deciding how they are examined. I also hope that doing the survey helped students to reflect on their own behaviour, attribute their successes and failures to the right reasons and hopefully do something differently next time. As for what I might do differently again, the one change that springs to mind is to try to collect feedback with names – it would be very interesting to see how responses correlated with actual exam scores, and also to do this for individual classes rather than all of my students as a group.
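If I do manage to collect named responses next time, the calculation itself is trivial. A sketch of the correlation I'd be looking for (both lists invented for illustration; note that statistics.correlation needs Python 3.10 or later):

    # Pearson's r between actual exam scores and happiness ratings.
    from statistics import correlation  # Python 3.10+

    exam_scores = [32, 41, 38, 45, 29, 36]  # hypothetical scores out of 50
    happiness   = [3, 4, 4, 5, 2, 3]        # the same students' 1-5 ratings

    print(f"r = {correlation(exam_scores, happiness):.2f}")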

Final Word

Thanks very much for reading if you got this far. If you'd like to try this yourself, please feel free to use the Google Form linked above for your own investigations, and if there's anything you'd like to chat about please do leave a comment below. If you do try something like this, I'd be very keen to know how it turned out.

Cheers,

Alex

What I’m going to think about the next time I write an exam

Usually proctoring (or invigilating in UK English) written exams at my university is a somewhat trying experience. Trying because I sit at the front of the classroom for over an hour in silence punctuated only by the frustrated sighing of my students. Looking out I see a sea of furrowed brows, scratched heads and, occasionally, expressions of total mental capitulation. The reasons for this are twofold. Firstly, students have quite often prioritized other studies (possibly including studying the effects of drinking and computer games on exam scores) over English and therefore aren't especially well prepared for the exam. It's important for me to recognize this as an examiner and to accept that I can't write an exam that pleases everyone, especially those who don't bother to prepare. However, the second reason for the atmosphere of general malaise in the exam room is that I am still far from a good writer of exams, and this is something that I would like to improve. This post will be a slightly self-indulgent one (aren't they all?) in which I have a look at what I did and what I can do better. I'm going to come back to this each time I write an exam to remind myself, and I'm putting it out there in case there's anything to be learned from it for others.

Let’s start with the specifics. The worst question that I wrote on this exam (about a very common mistake) went like this:

Correct (수정) the underlined word in the sentence (1 point) and write it again on the line below using different language, but keeping the same meaning. (1 point)
3. I’m going on a date. I bought new shoes and jeans to look gentle.

This is fine as far as the first '(1 point)', but then gets very confusing. So much so, in fact, that when grading the exam I misunderstood my own instructions and only marked the first part of the question and not the second. I was confused by students offering different versions of both the word and the sentence. There are two main problems here. The first is that the pronoun 'it' in the instructions could refer to either the word or the sentence, and here it's more likely read as referring to the word. Largely this is just crap writing on my part, but it does also point to a wider issue: pronouns are an area of confusion for low-level students and something that I should perhaps try to avoid in future.

The second problem here is that the instruction is not particularly clear anyway, especially if you're reading the sentence on this blog. What I intended in writing the question was to challenge students to use a couple of other ways of expressing cause ("because I wanted to / so I would"), but without some form of guidance it relies on students remembering the classroom context, and essentially turns the exam into a game of 'guess what the teacher wants us to say', which I would sincerely like my exams, and class in general, not to be. Next time I need to remember that it's dangerous to rely on classroom context too much, and that anyone sitting down to take my exams should be able to supply the answers from a good knowledge of English.

While reflecting on this exam I wondered whether an example would have helped, but there was only one question of this type on the exam. It’s also very difficult to exemplify something like this without giving the answer away. However, I could have easily supplied a hint in the form of “because” and/or “so” as a prompt.

This is a general pattern in my exam writing. My question prompts tend to be too open, and this probably confuses students and also makes grading more difficult. Take these two examples:

Think of a movie that you saw recently. Write a sentence about parts of the movie. You must use some of the language that we used in class in each sentence.

Respond to these questions and give some helpful extra information.

Again, these are really hard to interpret without classroom context. What's worse, in the first case, is that it doesn't even call for successful or interesting use of the loosely defined "language we used in class", but simply that it be used. This leads to answers like "his facial emotion is emotional", which, going by the instructions, deserves at least partial credit, as we talked about emotional as a way to describe acting. The second instruction is a little bit better, but still needs much more clarity. What I wanted students to do was answer a yes/no question and supply a little bit more information in order to help the conversation progress. Again this led to some strange answers that were difficult to grade. I also mixed questions that followed on from each other with others that didn't, without really specifying which was which, and based follow-up questions on expected answers to previous ones; in some cases students didn't give those answers, making it impossible to answer the next question. On reflection the whole thing would have been much better set as a discourse completion task – something which would suit the conversation-based nature of the class much better anyway.

These problems are symptomatic of a tension between language work and communication work that I often feel both in class and when writing exams. Largely my class is a conversation-based one, with the emphasis on just saying something rather than saying something 'correctly'. Prompts like the two under discussion here are an attempt to mirror that in an exam, but then they have to be graded as such, and it's difficult to know where to draw the line in terms of understanding or interest. Something which might go over fine between two students in conversation can look pretty senseless written down.

Basically, these prompts are me getting caught between assessing communication and assessing language (though I'd accept that there may not be a clear space between them in which to get caught). I either have to go one way or the other: into a more open writing prompt with a rubric, or into more language-based assessment; and I can see plenty of good reasons not to do either. Asking my students to write at length in an exam seems unfair if we don't do any writing in class*. On the other hand, a totally language-knowledge-based exam doesn't seem to be in the spirit of the class, might require spending more time in class looking at language, and would probably be even more difficult than this exam was, as a lot of the marks that the students did get came from open prompts.

The last thing I want to talk about here, I think, is difficulty and grading. As I mentioned above, this exam was difficult for the students, as the two histograms below hint.

Exam Result Histograms

On the diagram above, 5 refers to students scoring between 0 and 5 out of a maximum of 50. Bearing in mind that I work on around 90% being an A, and somewhere in the low 80s being a B, this exam left nobody getting an A and only 4 of 40 students getting a B. Honestly this is probably a-whole-nother blog post in itself, but clearly something is wrong here. Either the students are not learning what I think they are, or I am not giving them enough time in class to learn the stuff that I think is important, or they're not learning full stop. When setting exams I'm definitely drawn to testing new learning, and I hate setting questions about things that students should already know, but maybe that's necessary to move the distribution up a bit. However I need to consider the kind of effect that this might have on students – will these marks give them a bit of a kick up the arse, or will they shatter the confidence that I had done pretty well at building up over the semester? Perhaps it might be a good time to collect some feedback?
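For reference, here's a sketch of the grade boundaries mentioned above; the exact B cutoff of 82% is my guess at "somewhere in the low 80s":

    # Letter grades from a score out of 50, using roughly 90% for an A.
    # The 82% B boundary is an assumption.
    def letter_grade(score, max_score=50):
        pct = score / max_score * 100
        if pct >= 90:
            return "A"
        if pct >= 82:
            return "B"
        return "C or below"

    print(letter_grade(45))  # A (90%)
    print(letter_grade(41))  # B (82%)
    print(letter_grade(35))  # C or below (70%, the raw mean above)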

I think I’ve got almost as far as I’m going to get with this post, but I’d welcome any thoughts anyone has on this, as I feel like I’ve made a little progress here, but there’s still some way to go. As a final bonus, here’s some other things that I need to think about next time:

  • Using the British “maths” leads to all sorts of subject-verb agreement horrors.
  • Be careful when using “repeat” if I really mean “rephrase”.
  • How important is spelling? Is "claims" an acceptable attempt at "clams" if I tend to de-emphasise the importance of spelling? How about "cramps"?
  • How can I make listening questions more difficult? Could I think about speaking faster or using a different accent?

Cheers,

Alex

* Although if the rubric assessed students in a similar way to our classwork (eg. content, understandability, interest) I guess it wouldn’t be so bad.

Class constructs: creating my own (part 1)

I blogged previously about the possibility of creating a construct for a short term class in order to keep teaching and testing in line with one another. There is also the advantage that your construct can be shared with students as a form of class goal, and activities can be justified to students in terms of it (especially if they are of the less fun variety). As a brief recap, a construct is a short statement of what you will teach and test, how you will go about it and the expected results and standards. In this post, I will document the first part of the process of creating my own construct.

At the end of the last post I looked at 4 areas that need to be considered in creating a construct. These were:

  • Assessment (& teaching) context (Students, institution, geographical location, purpose, score use and tester).
  • Assessment (& teaching) procedures (What students are expected to do in class and exams)
  • Construct definition (What do you mean by the terms used to describe your class – what is “English”, “Conversation” or “Speaking” for this class?)
  • Models and Frameworks (How can you justify the above with reference to clever people or yourself?)

In this post I will try to outline my thoughts on the first two areas.

Assessment and Teaching Context

A good place to start here is asking who my students are. In my case this also covers a lot of the geographical and institutional factors. Beautiful and unique snowflakes that they of course are, my lot do form quite a usefully homogenous group in two ways. Firstly, they are all Korean and are products of the educational culture here, and secondly they are all students at a polytechnic university. This allows me to make some guiding assumptions:

  • Their English education will have been largely reading and listening focused, and grammar and vocabulary will often have been decontextualized and almost always depersonalized. If they have encountered speaking they have not been especially successful in learning it. I’d venture to say that they have generally learned English as an academic subject rather than a language.
  • They are not taking English as a major, and so they are unlikely to be learning it out of a love for the subject (though this is possible). They are more likely to be learning it out of long-term pragmatic value, but in the short-term their grade is the most important factor. Their future careers are more likely to require practical, rather than perfect, English.

In terms of assessment purpose and score use, one or two things are worth considering. Firstly, I’m aiming to assess achievement not proficiency. In other words, someone who makes a great effort and improves from 0 to intermediate should theoretically score higher than an initially high-intermediate speaker who improves little. Secondly, assessment is not only in terms of exams, but performed continuously over the term through participation, quizzes, projects and 1:1 conversation. The scores have a very narrow use, which is assigning grades for the term. However these grades may dictate scholarships, so it is important that they accurately reflect effort and achievement.

One final consideration is who the assessor is. For the most part it is me, but I do feel that student views should play a part in assessment as well, especially in something as subjective as participation. I think allowing students to play a part in scoring themselves and others also helps to motivate them, as well as keeping complaints down at final grading time.

Assessment and Teaching Procedures

In assessing and teaching the course I want to take the notion of "conversation" as literally as possible. By this I mean that the aim of the course will be to develop the ability to hold medium-length conversations in English on a few topics, and we will learn to do this by having short conversations throughout the course, which will serve as a framework for practicing useful lexis, conversational skills and strategies, and a little bit of grammar.

Given this aim it makes sense for the mid-term and final speaking exams to take the form of conversations. This will form the principal drive for the course, and students will be expected to apply what they have learned during class in the exams. The length of the exam is important, as it should be sufficient to pose a real challenge to students (or at least appear to).

Also significant is the number of participants. This is a really interesting question that I am still working on puzzling out. My preference in the past has been for 4-person speaking assessments. I believe that they pose a greater degree of challenge in terms of organizing turns and dealing with multiple inputs. They're also practically much easier to arrange, and going back to the length, I think that a 25-minute 4-person exam sounds more difficult than a 12.5-minute pair conversation, even if the speaking time per student is the same. The potential downside to this is that a lot of my classwork is done in pairs, though there is nothing to say that I couldn't up the group size over the course of a semester.

Another thing to figure out is the role of written exams. It is institutionally mandated that 50% of my mid-term and final exams is a written paper. What, then, is the role of writing in a conversation course? Listening might provide some of those marks, perhaps choosing the right answer to a question. The discrimination of similar sounds could also be included. I also think that common errors that we point out in class should have a role. Finally, vocabulary and lexis in the form of gap fills will be important, as well as subtler shades of meaning that we talked about in class that simply won't come up in a speaking exam. As far as possible, I would like to avoid grammar transformation exercises and reading passages.

All of this and I'm only really through talking about final assessments. Ongoing assessments (quizzes and participation scores) should also be generally conversation based, and reflect the effort made to actually have conversations, on the basis that conversational skills cover a wide range of areas, and are probably subject to individual variation. It's developing an individual ability to have conversations that I am most interested in during this course. Partly this can be taught directly in terms of strategies and language, but partly this is something that you figure out for yourself by getting involved. The course needs to both offer opportunities to do this and reward them when they are taken.

To bring this post to a conclusion, as I am already over my self-imposed 1,000 word guideline, my teaching and assessment aims should be to improve speaking as this is the area in which my students need most improvement. A conversation based approach gives an opportunity for personalizing the language as well as providing a reasonably well defined structure for assessment (see the next post). Conversation must form the basis for ongoing and final assessment of achievement on the course, with an emphasis on fluency and communication skills rather than accuracy (or complexity especially). The ability to deal with small group work is thought to be important, as is the ability to function in English speaking environments for a slightly longer duration.

In the next post I’m going to tackle my description of conversation. I hope you’ll be there to read it. In the meantime if you have comments, questions or suggestions, please leave them below the line.

Cheers,

Alex

Class constructs: an introduction

16 weeks, roughly 5 hours of class time in each. Throw in a couple of presentations and a magazine-making project, as well as exams, entrance tests and university festivals, and it doesn't leave a lot of time for learning something as large as a language. Nevertheless, we grab our textbooks and have a go – and while we do so we also try to order ourselves for the dishing out of grades or levels. These are basically the two problems that I imagine many teachers with some autonomy grapple with: what to teach, and how to assess it. In this post I'm going to set the background for creating class constructs that go some way to tackling these problems.

Construct is a term drawn from the assessment literature, and is more or less a statement of what the test author believes they are testing, how they should test it, and what the results might look like. As an example, a construct for the TOEFL exam would be a definition of the English ability required to take a higher education course, perhaps in terms of vocabulary size, grammatical knowledge, skills (summarizing, note taking), functions, knowledge of genres and many other things. It would also include the kind of tasks that the authors felt would test these, and what acceptable and unacceptable performances looked like. All of this is realized in the test that is actually taken, and the rating scales, scoring and the final grade. Therefore, if you score a full 120 on the TOEFL iBT, you can congratulate yourself on being the embodiment of what ETS (the makers of TOEFL) think academic English is.

“Teaching to the test” gets a bit of a bad rep, especially in Korea where anything that isn’t an academic reading passage is ruthlessly cast aside. It feels a bit dirty to be honest, like you’re being cowed by the man – encouraging your students to chase letter and number grades over actually learning anything useful, or teaching test-taking strategies rather than language. If the test is crap (TOEIC, the Korean university entrance exam) then this is abundantly true, but if the test is good, then surely this can be a good thing (these two situations tend to be called negative and positive washback respectively). For a short course test such as mine, which is aimed at measuring learning, control of the design should play a large part in deciding what should be learned (though we know that this is not an exact science), and so a construct not only defines the construction of a test, but in this case the construction of the whole course.

But why exactly is this useful? Firstly, going back to the opening sentence, time is short, and English is not only big but constantly shifting. With hundreds of thousands of words, not to mention fixed phrases, as well as countless combinations of functions, domains of use, registers and skills, pinning English down to something teachable is a constant source of frustration and argument in journal articles, blogs and at conferences. General English courses (in the form of books) try to tread the most middling, inoffensive and general line, in order not to upset anyone into not buying them. However, this means they also tend to miss out anything culturally specific, potentially insulting or simply left-field. Having a construct allows you to cut out the irrelevant stuff and focus on what your students (and you!) really want and need. In my case, students can translate about 3000 single words in English, and have a pretty decent reading level. Their grammar is OK if they can write it out first, but spoken interaction is often conducted in single words at the beginning of the course. They also have very little knowledge outside the academic register. I've talked a lot about this already (and will again), but safe to say that concentrating almost exclusively on speaking skills is a good bet.

The second advantage that I can see for developing a construct for the class is that if you want the exam to dictate teaching, you theoretically should write the exam first. The problem of course, is that it’s difficult to write an exam based on content that you haven’t taught yet, especially if your course is based a lot on lexis that arises from what students say, rather than being planned in advance. A construct for the class provides a nice straight ledge for aligning one’s ducks on, and if teaching and testing are conducted with reference to it then the two should reflect and reinforce each other. This hopefully will help me to tackle two problems that I’ve encountered in previous semesters – difficulty in writing exams that accurately reflect what we have done in class, and also the fact that in feedback I tend to score low on questions about students understanding my goals. As an extra idea, there would of course be nothing to stop you designing a construct in collaboration with your students.

So what goes into designing a construct? I’m going to finish this post by examining in a little more detail the kind of thinking that one might need to do, and presenting the questions that might need to be answered. In doing this I’m drawing heavily on the work of Sari Luoma (2004) on speaking assessment, though these considerations could easily be adapted to other assessment concepts.

Assessment Context

A construct links the theoretical with the more concrete (though of course this is still within the context of a test, which itself is often a prediction of how a testee would fare in the real world). Part of this is defining the context of the test – institution, purpose, takers and their backgrounds, the tester and the plans for score use. While the theoretical definition of speaking might be similar for young children, teenagers and young adults, the ways of eliciting speech (task type, topic) will be very different, so context here is extremely important.

Assessment Procedures

A construct should have some indication of the length and frequency of the assessment, as well as the tasks required to elicit it and the methods used to score it. This helps keep things practical (no sense in having hour long one on one speaking tests when you teach 200 students) as well as, in the case of my class constructs, meaning that class activities can mirror testing activities.

Construct Definition

What are you actually going to try to teach and test here? The more specific you can be here the better, so you might want to think about sub-skills, grammatical structures and vocabulary ranges, rather than something general like speaking. You should also consider what a good, average and bad performance might look like in these terms. All of this will help greatly in designing rating scales and rating performances.

Models and Frameworks

What’s even better is if you can relate the thinking above to reading that you’ve done in the area. An example of this might be Hymes’s SPEAKING framework. This gives you a base to work from in terms of teaching and learning.

A Construct Definition

Finally, you should attempt to summarize all of the thinking above into a neat little paragraph like the one below:

The aim of this test (class) is to assess (teach/improve) the examinees' ability to express their ideas in English, take their interlocutor's contributions into account and make use of them in the discussion, and collaborate in the creation of interaction. Social appropriateness is not assessed (taught) explicitly. (Luoma 2004: 121).

So that is roughly what a construct design process looks like. In the next post or two I’m going to have a go at it myself. In the meantime I’d be interested to know your views on whether this is a sensible approach. Are there any downsides to working this way? Am I consigning my students to a life of exam hell? Any argument very much welcomed below the line.

Cheers,

Alex

Reference

Luoma, S. (2004). Assessing speaking. Cambridge: Cambridge University Press.

Winter Pronunciation Camp Reflections Week 1

So we're back at the time of year when everything in my life goes mental and I have to stop blogging for a while, as I'm involved with creating my own camp program and teaching it 5 days a week (actually I'm in the middle of teaching 9 days out of 10!) as well as dealing with impending MA deadlines (if anyone suggests doing a module on assessment, I recommend reconsidering your friendship with them immediately). Except that this winter I'm resolving not to just go quiet for six or seven weeks, but to try and blog about this pronunciation camp as I teach it.

These blog posts are going to be somewhat hastily put together, I'm afraid. I'll try to do some more polished reflections after the camp has finished, but for now they will detail what I'm doing and how I'm feeling about it. They're actually based on my experimental audio reflections, which I've been keeping daily and which are so far proving extremely helpful. Anyway, here goes week one…

Wednesday

This was a bit of a get-to-know-you and introduce-the-course sort of day. I started off with my favourite name game, in which one person starts by introducing themselves and saying one thing about themselves, then each successive person must remember and introduce everyone else before they introduce themselves. While it's not particularly exciting or innovative, it does involve maximum use of students' names, and is a big part of the fact that at the end of week one I know everyone's name already.

Then as a get-to-know-you activity we wrote five answers about ourselves and mingled, trying to guess what the questions were (idea stolen from this excellent thread). It was interesting to see how different groups interpreted this – some were very keen on guessing the questions, others used it as a much more general basis for a conversation. I was happy either way, and it seemed to generate a lot of activity in every group. It also worked well for me to meet students on a one-to-one level, and possibly leads to the development of more rapport, as the encounters are much more personal.

I then shared my four key goals for the course, which are:

  1. Students will learn how to improve their pronunciation.
  2. Students will increase their receptive and productive pronunciation power (!)
  3. Students will increase, and be able to measure increases in, their fluency.
  4. Students will be more aware of the learning process through reflective journalling.

Having explained these a little, we then moved onto a bit of reflection. I posed three questions to the students to talk about, then write a short journal entry on for homework. These were about the importance of pronunciation, the students' experience of and ideas for pronunciation teaching, and their thoughts on their own pronunciation.

One thing that arose during this is the level of monitoring I should do during reflective conversations. I made sure that the students were aware that their reflections were their own, and that there was no need to share them, so it felt a little off to try to monitor reflective conversations like this. I think from next time I'll make it a clear policy that I won't monitor unless invited to.

Thursday

On Thursday I introduced students to one of my processes for the camp, what I called “The Learning Cycle”, but which is neither cyclic nor especially about learning. It might be better termed “the habit-changing process”. Anyway, it essentially uses the following four stages:

  • Discovery (of undesirable habits)
  • Correction (finding out how to do it right)
  • Goal-directed, attention-focused practice
  • Good habit formation

The idea is to raise habits out of unconscious production, change them, then reintegrate the new improved habit. Today's class focused on the discovery stage through a diagnostic test (designed by David Kim and published in the 1999 KOTESOL PAC Proceedings). Running a diagnostic test for many students simultaneously is tricky, but I came up with a way to do it. I gave the students 15 minutes to read the test sentences and record them on their phones. I gave them the sentences plain so there wasn't too much second-guessing. Then together we analysed them in class, with me providing both good and (hilariously, to the students) bad models. I think this was actually a pretty good way to do it, as it allows many problems to be pointed out in a short space of time, and hopefully develops better listening skills and pronunciation awareness in general. However, the odd check I have done since shows students marking themselves very harshly, despite encouragement, so if you're doing this keep an eye out for that.

Friday

Friday was perhaps the least satisfying day of the week, for reasons that I will outline. I'm trying to give students the tools to improve their pronunciation, so today turned into a bit of an information dump. I gave students British and Korean vowel quadrilaterals, plus my own consonant chart, and explained how to use them. Now, this would be incredibly useful to me, but the students just didn't seem to "get it". Having set some goals for improvement based on the diagnostic test, I gave them the task of learning how to produce the sounds correctly using the materials I had just given them, and then building these into a set of common words that they could make into an Anki deck to do deliberate practice with (I also introduced howjsay and forvo to provide some models).

This was disappointingly badly done, and left me feeling like I hadn’t really conveyed my point (in truth I hadn’t really). The effort that’s required to change a feature of pronunciation, especially one that’s ingrained in a young adult, is significant, and I’m not sure that the students realize that it takes daily focus (in fact, on a show of hands most seemed to believe that having mastered a sound they would then produce it correctly each time). Anyway, my solution is going to be leaving it for a couple of days, but then conducting a proper practice session, carefully staged, with theories elaborated and Anki decks built during the class, and revision assigned for homework. This will likely be a new way of working for the students, so it’s important to model it carefully for them, rather than just expect them to do it straight off the bat. I’m hoping for better things next week.

Saturday [ 😦 ]

Due to the new year holiday we ended up with a make-up class on the Saturday of this week. I suspected that students wouldn't be particularly up for long lectures, intensive pronunciation focus or anything else resembling hard work (I also suspected the teacher wasn't really up for this either). I'd introduced my students to the IPA in a homework assignment, but wanted to help them learn it. When I learned, the thing that helped me most was actually using the thing, so with this in mind I designed a scavenger hunt around the building (inside, as it was -10C out) with all of the clues written in IPA. The clues also had to be earned by correctly pronouncing words like "epitome" (thanks Mike) having found the pronunciation in a dictionary.

Overall the event was a great success, and got rave reviews from the students. Honestly I saw it more as a bit of light relief after a hard week, but I’m sure the students did get a bit more familiar with the IPA. However, one incident did lead me to doubt the usefulness of teaching IPA. When I set the key words for the clues, one student immediately looked up the voice sample pronunciations on his phone. Smart enough, and it left me wondering whether IPA might be a little less useful these days than in days of yore.

The final thing I want to share was my opening activity for the day. Lifted from Nation and Macalister’s (2009) Language Curriculum Design, I started a fluency tracker for my students. Basically it’s just a graph of their fluencies in various areas. We started with reading fluency. I allowed each student to choose a graded reader, and set them 4 minutes of reading time. They then counted the lines that they had read. We’re going to test this regularly to see if they improve, and also look at writing and speaking fluency too. One thing that ER fans should note is that several students asked to borrow the books to read at home!
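For anyone who wants to copy the tracker, the record-keeping really is as simple as it sounds; here's a sketch, with invented names and line counts:

    # Lines read in a fixed 4-minute window, logged per session so each
    # student can graph their own trend over the camp.
    fluency_log = {}

    def record_reading(student, lines_read):
        fluency_log.setdefault(student, []).append(lines_read)

    record_reading("Minji", 38)   # week 1
    record_reading("Minji", 44)   # a later session: 6 lines faster
    print(fluency_log["Minji"])   # [38, 44]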

Alright, that’s this week’s rambling done I think. Any suggestions, comments and criticisms very much welcomed.

Cheers,

Alex