© 2026 StudyFi s.r.o. Built with AI for students


Teaching Speaking Skills in EFL: A Comprehensive Guide


Introduction

Assessment is a vital part of any course: it measures what learners know and can do, guides teaching priorities, and motivates students. This material focuses on the practical issues that arise when an assessment includes an oral (spoken) component, and on how teachers can design and apply reliable oral-testing procedures while minimising disruption.

Definition: Oral component — a part of an examination or test that requires learners to produce spoken language as evidence of their ability.

Why oral assessment is different

Oral testing differs from written testing in two main ways:

Practical complexity

  • Written grammar tests are relatively easy to set, distribute and mark.
  • Oral testing usually requires individual interaction between tester and learner, which takes much more time and can disrupt normal class routines.
  • If every student must be interviewed, timetabling and logistics become serious concerns.

Definition: Practical complexity — the logistical and time-related difficulties involved in administering a test.

Reliability and consistency of marking

  • Different testers may use different criteria or apply the same criteria inconsistently.
  • Judging spoken performance can be more subjective than judging grammar or written answers.
  • Clear, agreed assessment criteria are essential to improve inter-rater reliability.

Definition: Reliability — the degree to which assessment results are consistent across different raters and occasions.
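As a toy illustration of what inter-rater reliability means in practice, two raters' band scores for the same recorded performances can be compared directly. The sketch below uses invented scores and the simplest possible measure, exact percent agreement; real moderation would typically use a more robust statistic.

```python
# Hypothetical example: two raters score the same five recorded oral
# performances on a 1-5 band scale (all figures are invented).
rater_a = [4, 3, 5, 2, 4]
rater_b = [4, 3, 4, 2, 4]

# Exact percent agreement: the share of performances given identical bands.
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"Exact agreement: {agreement:.0%}")  # → Exact agreement: 80%
```

Low agreement on such a check is exactly what calibration sessions are meant to surface and resolve before live marking begins.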

When oral testing is essential

  • Some public exams include an oral component (for example, internationally recognised language tests). If learners plan to take such exams, their course assessment should reflect that format.

Definition: Washback effect — the influence that the nature of an exam has on teaching and learning; what is tested tends to be taught.

💡 Did you know? Including an oral component in a final exam can increase classroom speaking activity, because teachers and students focus on what will be tested.

Making oral testing workable in courses

Although oral assessment is challenging, it can and should be included when appropriate. The activities used for testing often match those used for practising; they do not need to be disruptive if planned carefully.

Practical strategies

  1. Schedule and plan
    • Allocate specific days or blocks for oral testing to minimise disruption.
    • Use small groups where possible to reduce total testing time.
  2. Standardise procedures
    • Produce a clear rubric with descriptors for levels of performance.
    • Train all raters on the rubric and do calibration sessions where raters score the same sample performances and discuss differences.
  3. Use recordings
    • Record oral performances so multiple raters can assess the same sample and moderation is possible.
  4. Integrate testing with classroom activities
    • Use the same tasks for practice and assessment to familiarise students and reduce test anxiety.

Definition: Rubric — a scoring guide that lists criteria and describes levels of performance for each criterion.
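To see why small groups reduce total testing time (strategy 1 above), the timetable arithmetic can be worked through with invented figures: the class size, per-interview time, and per-group time below are assumptions for illustration only.

```python
# Hypothetical timetable arithmetic for an oral test (all figures invented).
students = 30
minutes_individual = 8    # one-to-one interview per student
group_size = 3
minutes_per_group = 12    # one group task assesses three students at once

individual_total = students * minutes_individual            # 30 * 8  = 240 min
group_total = (students // group_size) * minutes_per_group  # 10 * 12 = 120 min
print(individual_total, group_total)  # → 240 120
```

Under these assumptions, grouped tasks halve the contact time, at the cost of making individual performance harder to isolate (see the table of test types below).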

Types of spoken tests (examples and how they differ)

| Test type | What it involves | Practical notes |
| --- | --- | --- |
| Interviews | One-to-one question-and-answer session | Flexible, good for probing, time-consuming |
| Live monologues | Learner speaks alone for a set time on a topic | Controlled, easy to time, requires clear task prompts |
| Recorded monologues | Learner records speech for later marking | Allows moderation and repeated listening; needs recording equipment |
| Role-plays | Learners perform set roles interacting with others | Can test specific functions/skills; requires careful task design |
| Collaborative tasks and discussions | Interaction between learners to complete a task | Assesses interaction; may be harder to isolate individual performance |
💡 Fun fact: Role-plays can reveal learners' ability to use language functions (e.g. apologising, requesting).
Oral Assessment Challenges

Keywords: Speaking, Communication Strategies, Language Teaching, Assessment

Key concepts:

  • Oral assessment requires more time and planning than written tests.
  • Different raters can judge speaking inconsistently without clear rubrics.
  • Use recordings to allow moderation and improve reliability.
  • Design rubrics with clear descriptors for pronunciation, fluency, accuracy and task achievement.
  • Schedule oral tests in blocks to minimise class disruption.
  • Match test tasks to classroom practice to reduce test anxiety.
  • Run rater calibration sessions before marking begins.
  • Choose the appropriate test type: interview, monologue, role-play, recorded, or collaborative.
  • Integrate practice and assessment tasks to exploit washback positively.
  • Provide students with clear instructions and practice opportunities before testing.

