"Executive" Summary:
Additional resources:
Source: This page is derived from the 8-page summary in the CTLT hard-copy resources.
Select topics for your questions by reviewing the readings, activities, and objectives, and identify the important concepts to be tested. Well-written learning goals do this for you automatically. Guidelines for selecting topics are:
When beginning to construct a multiple-choice question, pose the question (the "stem") first. A well-constructed stem is a stand-alone question that can be answered without examining the options. Some guidelines are:
The wording of the stem, and the verbs it contains, determines the overall cognitive level of the question. It can be useful to refer to Bloom's Taxonomy when preparing stems so that concepts are tested at the appropriate level, but don't ignore "low-level" learning goals: treat Bloom's Taxonomy as a pyramid, a sequence from foundation to expert behaviour. Writing multiple-choice questions at the higher Bloom's levels is possible, but can be very difficult and time-consuming.
Create the options (both correct and incorrect answers) after writing the stem. Options should focus on testing the understanding of important concepts and on common misconceptions. Creating plausible distractors is the most difficult aspect of writing MCQs. Guidelines for constructing options:
When developing options, it is useful to map them on a continuum from correct to incorrect in order to visualize the "correctness" of a given option.
If all of the distractors cluster around the incorrect end of the continuum, leaving only the correct answer at the correct end, the question will be unambiguous.
If the options cluster at the correct end of the continuum, the stem should include wording such as: Which is MOST significant? What is MOST important? What would be the BEST solution? These kinds of questions require finer discrimination by students. They can also lead to problems, so use them with caution and validate them if possible.
Multiple-choice questions have a reputation for only testing lower-level skills like knowledge and recall. However, it is possible to write questions that target a higher Bloom's level. Here is one example:
In your argument, you are citing a number of cases from different courts. This is the first time you cite any of these cases. What is the most accurate citation sentence (use your citation manual)?
Here students are asked to select the citation that is most accurate. All of the citations contain errors, so students are really being asked to "hypothesize" which errors will have the greatest impact on the citation's effectiveness. This question tests at a very high Bloom's level. Example due to Sophie Sparrow and Margaret McCabe of the Pierce Law Center in Concord, New Hampshire.
Ideally, difficult questions should be "validated": tested with students to ensure they read and think through the question in the way you expected when it was designed. For example, have a past student or a graduate student "think aloud" while you watch them answer the question. Item analysis can also help identify questions that are not performing as you expected.
A good example of how validating with students reveals important information about how they perceive the questions being posed is Ding, Reay, Lee, and Bao, "Are we asking the right questions? Validating clicker question sequences by student interviews", Am. J. Phys. 77(7), July 2009. Here is a useful paragraph from their "Summary and Discussion" section:
"Many validity issues missed by physics {geoscience, chemistry, whatever ...} experts were revealed by student interviews. Why do experts miss these issues? For them, correctly bringing in relevant information is an automatic task, much as driving a vehicle is for an experienced driver. When answering these questions, the experts optimize their attention allocation, ignoring irrelevant information and filling in missing information. But students don’t possess domain knowledge with the same breadth and depth and their knowledge often is not hierarchically structured. Consequently, students will sometimes perceive the questions differently than experts".
Item analysis is a highly recommended process that analyzes each item in terms of student response patterns. It helps you assess the test's validity, check for possible biases, and evaluate student strengths and weaknesses. The Vista course management system can produce tables for item analysis using "reporting of assessments"; see the Vista documentation or ask a colleague or a Vista support person. For an introduction to item analysis, see the last two pages of the 8-page Introduction to Multiple Choice Question Writing (MS Word *.doc format) mentioned above.
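If you would like to compute the two most common item-analysis statistics yourself (for example, from an exported spreadsheet of scored responses), here is a minimal sketch in Python. It assumes responses have already been scored as 1 (correct) or 0 (incorrect); the function name, the variable names, and the upper/lower 27% grouping are illustrative conventions for this sketch, not features of Vista.

```python
def item_analysis(responses, group_fraction=0.27):
    """responses: one list of 0/1 item scores per student."""
    n_students = len(responses)
    n_items = len(responses[0])

    # Rank students by total score to form upper- and lower-scoring groups.
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, int(round(group_fraction * n_students)))
    upper, lower = ranked[:k], ranked[-k:]

    results = []
    for i in range(n_items):
        # Difficulty index: proportion of all students answering this item correctly.
        difficulty = sum(s[i] for s in responses) / n_students
        # Discrimination index: difficulty in the upper group minus the lower group.
        p_upper = sum(s[i] for s in upper) / k
        p_lower = sum(s[i] for s in lower) / k
        results.append({"item": i + 1,
                        "difficulty": round(difficulty, 2),
                        "discrimination": round(p_upper - p_lower, 2)})
    return results


# Toy example: five students, three items (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
]
for row in item_analysis(scores):
    print(row)  # items with low or negative discrimination deserve a closer look
```

As a rough rule of thumb, items that nearly everyone gets right or wrong (difficulty close to 1.0 or 0.0), or items with low or negative discrimination, are the ones most worth re-examining or validating with students.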