PDE 705 (MEASUREMENT AND EVALUATION IN EDUCATION)
SATURDAY, 26TH JULY 2025: NOTES AND PAST QUESTIONS FOR THE EXAM
PDE 705: Measurement and Evaluation questions and answers, with a clear one-sentence explanation
for each part of the question followed by accurate, concise answers:
1.
(a) Define Evaluation.
Instruction: Explain what evaluation means in educational settings.
- Evaluation is a systematic process of collecting and
analyzing information to judge the effectiveness or value of an
educational program or student's performance.
(b) Differentiate between Test, Assessment, Measurement, and Evaluation.
Instruction: Clarify how these four concepts differ from each other.
- Test: A tool for measuring specific knowledge or skills (e.g., a math quiz).
- Assessment: A broad process of collecting data on learning using various tools like tests, projects, or observations.
- Measurement: The process of quantifying learning outcomes using scores or numbers.
- Evaluation: Using results from measurement and assessment to make informed judgments or decisions.
(c) List five factors necessary for effective evaluation.
Instruction: Identify key conditions that make an evaluation valid and
useful.
- Clear Objectives
- Validity
- Reliability
- Fairness
- Practicality
2.
(a) What is School-Based Assessment (SBA)?
Instruction: Define the concept of SBA.
- SBA is a continuous assessment approach used within
schools to monitor students' progress through tests, assignments, and
classwork.
(b) State four ways SBA enhances teaching.
Instruction: Explain how SBA supports teaching improvement.
- Identifies students' learning gaps.
- Enables timely and personalized feedback.
- Helps in adjusting instruction.
- Promotes active learner participation.
(c) Classify educational objectives.
Instruction: Name and describe the domains of learning objectives.
- Cognitive – intellectual skills like remembering and reasoning.
- Affective – emotional responses, values, and attitudes.
- Psychomotor – physical skills and motor activities.
3.
(a) List basic principles for constructing a multiple-choice test.
Instruction: Mention the rules that ensure good MCQ items.
- Use clear language.
- Align with objectives.
- Avoid bias.
- Include one correct answer.
- Use realistic distractors.
(b) Give five purposes for constructing tests.
Instruction: State the reasons for creating school tests.
- To measure learning.
- To inform instruction.
- For certification or placement.
- To motivate students.
- For performance comparison.
4.
(a) What is Test Validity?
Instruction: Define validity in relation to tests.
- Validity refers to how well a test measures what it is
intended to measure.
(b) Describe three types of validity.
Instruction: Identify and explain forms of test validity.
- Content Validity – Measures coverage of subject matter.
- Construct Validity – Measures abstract traits (e.g., creativity).
- Criterion-Related Validity – Measures how well test scores relate to an external criterion (e.g., predicting later performance).
(c) Differentiate between Validity and Reliability.
Instruction: Explain how the two testing concepts differ.
- Validity ensures the right thing is being measured.
- Reliability ensures the results are consistent over time.
5.
(a) State guidelines for scoring essay tests.
Instruction: List best practices for marking essays fairly.
- Use scoring rubrics.
- Define expected answers.
- Be objective and consistent.
- Provide feedback.
- Focus on overall understanding.
(b) Explain Item Discrimination and Difficulty Indices.
Instruction: Define and state the use of these test item statistics.
- Item Discrimination Index – shows how well a question differentiates between high- and low-performing students.
- Item Difficulty Index – indicates how easy or hard a test item is, based on the percentage of students who answer it correctly.
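The formulas are not stated in the note above; the sketch below shows how these two indices are commonly computed (the function names, the example figures, and the upper/lower-group split are illustrative assumptions, not taken from the note):

```python
# Common item-analysis formulas (illustrative sketch, not from the note).
# Difficulty index p = proportion of all examinees who answer the item correctly.
# Discrimination index D = (upper-group correct - lower-group correct) / group size,
# where the upper and lower groups are typically the top and bottom scorers on the whole test.

def difficulty_index(correct_count: int, total_examinees: int) -> float:
    """Proportion answering correctly: close to 1 = easy item, close to 0 = hard item."""
    return correct_count / total_examinees

def discrimination_index(upper_correct: int, lower_correct: int, group_size: int) -> float:
    """Ranges from -1 to +1; higher values mean the item separates strong and weak students better."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical example: 30 of 40 students answer the item correctly;
# 9 of the top 10 scorers and 4 of the bottom 10 scorers get it right.
print(difficulty_index(30, 40))        # 0.75 -> a fairly easy item
print(discrimination_index(9, 4, 10))  # 0.5  -> discriminates reasonably well
```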
6.
(a) Explain Mean, Median, Mode, Range, and Standard Deviation.
Instruction: Give definitions for basic statistical terms.
- Mean: Average score.
- Median: Middle value in a sorted dataset.
- Mode: Most frequently occurring value.
- Range: Difference between highest and lowest values.
- Standard Deviation: Measure of data spread around the mean.
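For reference, these definitions correspond to the standard formulas below (a reference sketch; the divide-by-n form of the standard deviation matches the worked examples later in this note):

```latex
\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad
\text{Range} = x_{\max} - x_{\min}, \qquad
SD = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n}}
```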
(b) Solve for Mean, Median, and Mode.
Instruction: Use a given data set to calculate statistical values.
Given Data (40 values):
35, 32, 33, 57, 52, 29, 30, 42, 43, 47, 35, 51, 45, 51, 50, 48, 41, 25, 23, 24,
35, 50, 40, 45, 55, 37, 36, 23, 25, 27, 56, 57, 25, 27, 33, 51, 43, 41, 51, 57
- i. Mean = Total sum ÷ Number of values = 1607 ÷ 40 = 40.18
- ii. Median = Average of the two middle values (20th and 21st) of the sorted data = (41 + 41) ÷ 2 = 41
- iii. Mode = Most frequent value = 51 (it occurs four times, more often than any other score)
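These figures can be checked quickly with Python's built-in statistics module (a verification sketch only, not part of the required answer):

```python
# Quick check of the mean, median, and mode of the 40 scores above.
import statistics

scores = [35, 32, 33, 57, 52, 29, 30, 42, 43, 47, 35, 51, 45, 51, 50, 48, 41, 25, 23, 24,
          35, 50, 40, 45, 55, 37, 36, 23, 25, 27, 56, 57, 25, 27, 33, 51, 43, 41, 51, 57]

print(sum(scores), statistics.mean(scores))  # 1607 40.175 -> rounds to 40.18
print(statistics.median(scores))             # 41.0 (average of the 20th and 21st sorted values)
print(statistics.mode(scores))               # 51 (occurs four times)
```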
1.
(a) Define Continuous Assessment. (5 Marks)
Instruction: Explain what continuous assessment means in educational
evaluation.
- Continuous Assessment is a consistent and comprehensive
method of evaluating a student’s learning progress over time using various
tools such as tests, assignments, and observations rather than relying
solely on final exams.
(b) State and explain five features of Continuous Assessment. (20 Marks)
Instruction: Describe five distinct characteristics that make continuous assessment effective.
- Ongoing Process – It is conducted regularly throughout the academic term, not just at the end.
- Holistic Evaluation – It assesses cognitive, affective, and psychomotor domains, providing a full picture of the learner.
- Formative Feedback – It offers timely responses to help students improve during learning, not after.
- Variety of Methods – It includes diverse tools such as quizzes, oral presentations, and classwork for balanced assessment.
- Encourages Active Learning – Students are continuously engaged and motivated to participate because they are being monitored consistently.
2.
Write short notes on the following concepts. (25 Marks)
Instruction: Provide concise explanations of key validity and
reliability concepts.
(a) Content Validity
- It measures how well a test represents the full content
or subject area it's meant to cover, ensuring alignment with curriculum
objectives.
(b) Face Validity
- It refers to how appropriate or relevant a test appears
to be at face value to test takers and instructors, even if it lacks
scientific rigor.
(c) Construct Validity
- It assesses whether a test truly measures the abstract
concept (construct) it's designed to measure, like motivation or
intelligence.
(d) Test-Retest Method
- A reliability technique where the same test is
administered twice to the same group after a time gap, and the consistency
of results is measured.
(e) Alternate-Form Method
- This method evaluates reliability by comparing results
from two equivalent versions of the same test taken by the same group.
3.
(a) Define Evaluation. (5 Marks)
Instruction: Give a brief definition of evaluation in education.
- Evaluation is a systematic process of collecting,
analyzing, and interpreting data to judge the effectiveness of
instruction, student performance, or educational programs.
(b) Differentiate between Formative and Summative Evaluation. (20 Marks)
Instruction: Compare these two types of evaluation with examples.
| Formative Evaluation | Summative Evaluation |
| Conducted during instruction | Conducted at the end of instruction |
| Provides immediate feedback | Provides final judgment on learning |
| Helps improve teaching and learning processes | Assesses whether learning objectives were met |
| Tools include quizzes, assignments, class discussions | Tools include final exams, end-of-term projects |
| Aimed at learning improvement | Aimed at certification, promotion, or grading |
4.
Calculate the Actual Score Using a Correction Formula. (25 Marks)
Instruction: Use the correction formula to calculate a student's score.
Given:
- Total Questions: 50
- Correct Answers: 40
- Guessing is prohibited, so incorrect answers are not penalized.
Correction Formula:
Corrected Score = Raw Score − (Incorrect Answers ÷ Total Questions × Penalty)
Since no penalty is applied, the corrected score equals the number of correct answers.
Corrected Score = 40
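As a further illustration only, the sketch below applies the same formula with a hypothetical penalty value (the 2-mark penalty is an assumption, not part of the question):

```python
# Correction formula from the note:
# Corrected Score = Raw Score - (Incorrect Answers / Total Questions * Penalty)

def corrected_score(raw: float, incorrect: int, total: int, penalty: float) -> float:
    return raw - (incorrect / total) * penalty

# The question as given: 50 items, 40 correct, 10 incorrect, no penalty.
print(corrected_score(40, 10, 50, penalty=0))  # 40.0

# Hypothetical variant (assumed figures): the same answers with a 2-mark penalty.
print(corrected_score(40, 10, 50, penalty=2))  # 39.6
```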
5.
State and explain five specific purposes for tests. (25 Marks)
Instruction: Mention and describe five main reasons why tests are used
in educational settings.
- Assessment of Learning – Tests evaluate how much knowledge and skills
students have gained over a period of instruction.
- Instructional Adjustment – Results from tests help teachers modify their
teaching methods to address student weaknesses.
- Student Motivation – Knowing that tests are part of the learning process encourages students to study and take learning seriously.
- Placement or Certification – Tests determine if a student qualifies for promotion
to the next level or for receiving a certificate.
- Evaluation of Educational Effectiveness – Tests are used to assess how effective a curriculum,
program, or instructional approach has been in achieving learning goals.
6.
Calculate the Mean, Mean Deviation, and Standard Deviation of the following scores:
20, 35, 40, 50, 65 (25 Marks)
Instruction: Perform three statistical calculations using the data
provided.
(i)
Mean Calculation
To find the mean, add all the scores
and divide by the number of scores.
Mean = (20 + 35 + 40 + 50 + 65) ÷ 5 = 210 ÷ 5 = 42
(ii)
Mean Deviation Calculation
Mean deviation is the average of the
absolute differences between each score and the mean.
Deviations from the mean: |20 − 42| = 22, |35 − 42| = 7, |40 − 42| = 2, |50 − 42| = 8, |65 − 42| = 23
Mean Deviation = (22 + 7 + 2 + 8 + 23) ÷ 5 = 62 ÷ 5 = 12.4
(iii)
Standard Deviation Calculation
Standard deviation is the square
root of the average of squared differences from the mean.
Squared deviations: (20 − 42)² = 484, (35 − 42)² = 49, (40 − 42)² = 4, (50 − 42)² = 64, (65 − 42)² = 529
Variance = (484 + 49 + 4 + 64 + 529) ÷ 5 = 1130 ÷ 5 = 226
Standard Deviation = √226 ≈ 15.03
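A short verification sketch of the three results, using the same divide-by-n (population) formulas as the working above:

```python
# Verifies the mean, mean deviation, and population standard deviation above.
import math

scores = [20, 35, 40, 50, 65]
n = len(scores)

mean = sum(scores) / n                                   # 42.0
mean_deviation = sum(abs(x - mean) for x in scores) / n  # 12.4
variance = sum((x - mean) ** 2 for x in scores) / n      # 226.0
std_dev = math.sqrt(variance)                            # 15.033...

print(mean, mean_deviation, variance, round(std_dev, 2))  # 42.0 12.4 226.0 15.03
```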
6.
Calculation of the Mean and Standard Deviation (25 Marks)
Given Scores:
65, 56, 45, 53, 50, 62, 60, 46, 52
(i)
Mean Calculation
Step 1: Add all the scores together
65 + 56 + 45 + 53 + 50 + 62 + 60 + 46 + 52 = 489
Step 2: Divide the total by the number of scores
Mean = 489 ÷ 9 = 54.33
(ii)
Standard Deviation Calculation
Standard Deviation formula:
SD = √( Σ(x − x̄)² ÷ n )
Where:
- x = each score
- x̄ = mean = 54.33
- n = number of scores = 9
Step 1: Calculate the squared differences from the mean:
| Score (x) | x - Mean (54.33) | (x - Mean)² |
| 65 | 10.67 | 113.85 |
| 56 | 1.67 | 2.79 |
| 45 | -9.33 | 87.06 |
| 53 | -1.33 | 1.77 |
| 50 | -4.33 | 18.75 |
| 62 | 7.67 | 58.85 |
| 60 | 5.67 | 32.11 |
| 46 | -8.33 | 69.39 |
| 52 | -2.33 | 5.43 |
Step 2: Add all the squared differences
Total = 113.85 + 2.79 + 87.06 + 1.77 + 18.75 + 58.85 + 32.11 + 69.39 + 5.43 = 390.00
Step 3: Divide by the number of scores (n = 9)
Variance = 390.00 ÷ 9 = 43.33
Step 4: Take the square root of the variance
Standard Deviation = √43.33 ≈ 6.58
Final Answers:
- Mean = 54.33
- Standard Deviation = 6.58
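A minimal check of these two answers; note that the working divides by n (the population formula), which corresponds to statistics.pstdev in Python, while statistics.stdev divides by n - 1 and gives a slightly larger value:

```python
# Checks the mean and standard deviation of the nine scores above.
import statistics

scores = [65, 56, 45, 53, 50, 62, 60, 46, 52]

print(round(statistics.mean(scores), 2))    # 54.33
print(round(statistics.pstdev(scores), 2))  # 6.58 (divides by n, as in the working above)
print(round(statistics.stdev(scores), 2))   # 6.98 (sample SD, divides by n - 1)
```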
PDE 705: Measurement and Evaluation
in Education examination content, suitable for
revision or answer writing:
1.
(a) Define Test (5 Marks)
A test is a structured and
systematic tool used to measure a learner’s knowledge, skills, abilities, or
performance in a specific subject. It typically includes questions or tasks
designed to evaluate learning outcomes and determine academic achievement.
1.
(b) Five Characteristics of a Good Test (20 Marks)
- Validity – Accurately measures what it is intended to measure.
- Reliability – Produces consistent results under similar conditions.
- Clarity – Instructions and items are simple, clear, and easy to understand.
- Fairness – Free from bias; gives equal opportunity to all students.
- Comprehensiveness – Covers the key topics and learning objectives taught.
2.
Six Reasons Why Evaluation Is Important to the Classroom Teacher (25 Marks)
- Measuring Student Progress – Tracks learning and academic development.
- Informing Instruction – Helps teachers improve teaching methods.
- Providing Feedback – Guides students on strengths and weaknesses.
- Motivating Students – Encourages learners to stay engaged and improve.
- Curriculum Improvement – Identifies gaps and effectiveness in the curriculum.
- Accountability – Supports informed decisions and policy adjustments.
3.
(a) Basic Steps in Planning a Test (15 Marks)
- Define Objectives – State clearly what the test should assess.
- Select Test Type – Choose suitable test formats (MCQs, essays, etc.).
- Design Test Items – Develop well-structured, relevant questions.
- Determine Scoring – Decide on a scoring guide or rubric.
- Review and Revise – Proofread and adjust the test before administration.
3.
(b) Five Principles in Constructing Short-Answer Tests (10 Marks)
- Clarity – Use straightforward language.
- Focus on Key Concepts – Test important facts and objectives.
- Brevity – Keep questions brief and direct.
- Objective Scoring – Ensure answers are clearly correct or incorrect.
- Coverage – Reflect the full range of course content.
4.
(a) Principles in Constructing a Test (15 Marks)
- Validity – Align test items with learning goals.
- Reliability – Consistency in results across occasions.
- Clear Instructions – Avoid ambiguity.
- Fairness – Eliminate bias and discrimination.
- Appropriate Difficulty – Balance between easy, moderate, and hard questions.
4.
(b) Continuous Assessment in Improving Teaching and Learning (10 Marks)
- Timely Feedback – Helps students improve progressively.
- Identifies Learning Gaps – Enables quick intervention.
- Motivates Students – Promotes regular study habits.
- Holistic Evaluation – Includes classwork, homework, and behavior.
- Promotes Consistency – Encourages steady performance throughout.
5.
(a) Explain the Term “Validity” (15 Marks)
Validity refers to the accuracy
and truthfulness of an assessment. It indicates how well a test measures what
it was designed to measure. A valid test produces dependable data for making
educational decisions and reflects real learning outcomes.
5.
(b) What Is Validity of a Test? (10 Marks)
The validity of a test
describes its effectiveness in evaluating the intended learning
objectives. A valid test ensures that results reflect true understanding, not
irrelevant factors like test-taking skills or language difficulty.
5.
(c) Types of Validity (15 Marks)
- Content Validity – Test covers all necessary curriculum areas.
- Construct Validity – Measures abstract concepts like intelligence or attitude.
- Criterion-Related Validity – Correlates with external standards or results.
- Predictive Validity – Forecasts future performance.
- Concurrent Validity – Correlates with existing assessments.
- Face Validity – Appears appropriate to users (students, teachers).
- Internal Validity – Ensures the test results are free from bias or interference within the design itself.
PDE 705: Measurement and Evaluation in Education (Continued)
5.
Additional Types of Validity
- Criterion-Related Validity: Examines how well a test correlates with an external
benchmark (e.g., academic performance or job success).
- Concurrent Validity: How well a test reflects current performance.
- Predictive Validity: How well a test predicts future outcomes.
- Face Validity: Refers to the superficial appearance of the test (whether it looks like it measures what it claims to).
- Convergent Validity: Demonstrated when two assessments intended to measure the same construct yield similar results.
6.
Short Notes on the Following (25 Marks)
(i) Intelligence Test (5 Marks):
A standardized tool used to assess an individual’s cognitive abilities, such as
reasoning, memory, and problem-solving. It helps determine IQ and identify
intellectual strengths or challenges.
(ii) Achievement Test (5 Marks):
Evaluates what a person has learned in a specific subject or area of
instruction. It measures knowledge gained, typically after formal education or
training.
(iii) Aptitude Test (5 Marks):
Measures an individual’s potential to succeed in a given activity or field.
Unlike achievement tests, aptitude tests focus on capacity rather than past
learning.
(iv) Interest Test (5 Marks):
Assesses personal preferences and inclinations toward various activities or
career fields. Often used in career guidance and counseling.