A few days ago, the College Board posted a practice test for the
redesigned PSAT, which will be given in October 2015.
It is also a glimpse of what the redesigned SAT might look like.
There has been a great deal of speculation about the
redesigned test. A number of people
have opined that the new test would be an “ACT clone.” I, myself, have speculated that the new test
would be designed as more of a high school exit exam than a college entrance
exam. Quite a few pundits have pointed
out that the SAT was losing market share to the ACT and have suggested that the
redesign might be an effort to gain that share back.
Upon looking over the test, I am now prepared to make the
following statements:
The writing section IS barely distinguishable from the
writing portion of the ACT. Otherwise,
this test is in no way an ACT clone. On
the other hand, I do feel that this test represents a fundamental shift in
purpose.
There are a number of possible purposes for giving a
standardized exam. (As opposed to a
teacher-made assessment.)
Here are a few:
- To document student learning (or lack thereof) of particular skills and concepts
- To distinguish among (or rank) students
- To drive the curriculum*
Prior to 2015, the PSAT’s fundamental purpose was a
balance of the first two items. As the
National Merit Scholarship Qualifying Test, it has been used to distinguish top
students from the pack, and it has documented whether or not students have a
grasp of particular skills and concepts.
It has NOT been used to drive the curriculum...until now.
In my opinion, several of the changes were specifically designed to have
an impact on the nature and content of classroom instruction.
One of the elements of Common Core language arts is a focus
on having students read critically. In
an effort to have the students respond to what the author actually said – as opposed to how they feel about what they think
the author said – students are being asked to point to pieces of the text as
“evidence.” In the PSAT critical reading
portion, students are asked to choose the best evidence for almost every
question. Now I’m not a language arts
teacher, and I don’t have any special critical reading or psychometric
expertise. However, it seems to me that
this is about the same as asking the same question twice. In other words, if you correctly answer the
original question, then the answer to the “evidence” follow-up is trivial. If you missed the original question, it would
be impossible. Will there be some kind
of scoring mechanism that uses this question-pairing to determine when the
correct answer was obtained by guessing?
Maybe. More likely, it is an
attempt to make sure teachers require their students to cite evidence in class.
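
Just to make that guessing idea concrete, here is a minimal sketch (in Python) of the kind of paired-item check that COULD be applied. This is purely my speculation – the College Board has not described any such scoring rule, and every name in the snippet is hypothetical.

```python
# Speculative sketch only: not a documented College Board scoring rule.
# The idea: a correct answer to the main question paired with the wrong
# "evidence" answer hints that the main answer may have been a lucky guess.

def flag_possible_guess(main_correct: bool, evidence_correct: bool) -> bool:
    """Flag a response pair where the main answer is right but the
    supporting-evidence answer is wrong."""
    return main_correct and not evidence_correct

# (main correct?, evidence correct?) for three hypothetical students
for pair in [(True, True), (True, False), (False, False)]:
    verdict = "possible guess" if flag_possible_guess(*pair) else "consistent"
    print(pair, "->", verdict)
```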
Here is another example from the math section. Months ago, when the sample questions were
released, I noticed that some of the questions were much longer and more
involved. This represents a distinct
shift. Up until now I have told my top
students, “If you are more than 3 steps into an algebra process, you probably
missed something.” One of my main
criticisms of some of the test prep books out there has been that too many of
the math questions can ONLY be solved with the application of tedious algebraic
steps and are thus not representative of the real thing. However, this has changed with the PSAT
practice test. It’s interesting. If I’m a professor of mathematics or
engineering, I’m interested in several different aspects of my incoming students’ math
skills. I want them to have a solid
grasp of the concepts, good number sense, AND I want them to be able to keep
track of what they are doing through a long problem that requires many steps
and sub-steps to solve.
Up until now, the major college entrance exams did a decent
job of testing the first two. (The SAT was better than the ACT in my opinion.)
They really haven’t attempted to do the third.
This is largely because a multiple-choice or single-final-answer exam
format is a TERRIBLE way to assess that third skill. If the student’s answer is incorrect, you
don’t know if he really can’t negotiate the process or if he just made a silly
error in the middle. Graded homework and
teacher-made assessments where partial credit is given are much better means of
determining whether or not the student can handle a long problem. Both of those would be reflected in the
students’ grades. By trying to test it
in this format, you add no new information.
The only reason I can come up with to include problems like this is to
encourage math teachers to have students practice longer, more complex
problems.
When the College Board first announced that the SAT would be
re-designed, an admissions officer at a small, selective school wondered in an
online forum, “Will the rSAT do a better job of distinguishing among students
at the top end of the spectrum?” This
was what she was hoping for. Other users
of the forum predicted that it would not.
To do so would require a test with a wider standard deviation, and a
large contingent of the (math-challenged) public believes that a wide standard
deviation is inherently “unfair.” In
fact, the speculation was that the new test would do a worse job of
highlighting differences among students.
Given that the redesigned test appears to have abandoned that goal
altogether in favor of driving the curriculum, I’d say that admissions officers
at selective schools will be plumb out of luck.
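
To see why the spread matters, here is a rough back-of-the-envelope illustration in Python. The mean, the standard deviations, and the 1600-point ceiling are made-up values for the sake of the example, not actual PSAT or SAT scale parameters.

```python
# Hypothetical numbers for illustration only – not the real PSAT/SAT scale.
# A wider standard deviation leaves more scaled-score room between the 95th
# and 99th percentiles, the region selective admissions officers care about.
from statistics import NormalDist

MEAN, CEILING = 1000, 1600            # assumed score scale

def score_at(pct: float, sd: float) -> int:
    """Scaled score at a given percentile, capped at the test ceiling."""
    return min(CEILING, round(NormalDist(MEAN, sd).inv_cdf(pct)))

for sd in (100, 200):                 # narrow vs. wide spread (made-up values)
    p95, p99 = score_at(0.95, sd), score_at(0.99, sd)
    print(f"SD {sd}: 95th percentile {p95}, 99th percentile {p99}, gap {p99 - p95}")
```

With the narrower spread, only a handful of scaled-score points separate the very good from the truly exceptional, so a silly error or two erases the distinction; a wider spread is what would let a test separate students at the top.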
The practice PSAT is posted here: https://collegereadiness.collegeboard.org/sat-suite-assessments/practice/practice-tests
* You may be wondering what it means to have a test “drive
the curriculum.” Let’s look at an
extreme example. Throughout the 1990s
and early 2000s, North Carolina had a state writing assessment. In fourth, eighth, and tenth grades, students
had to write a timed essay and send it off to be scored by specially trained
“experts.” After several years of dismal
results, some statisticians called “foul.”
They pointed out that the scoring methods were severely flawed, and thus
that the scores were ultimately meaningless.
(By the way, the scoring methods used to score the ACT and SAT essays
have some of the same issues.) The
state’s surprising response? “Yes, we
know.” Students (and by extension their
teachers and parents) suffered through this test for YEARS. Why? Because
if there’s a writing test – even a flawed one – teachers will spend time
teaching writing. The average amount of
classroom time spent on writing instruction quadrupled. Prior to the test, some teachers had spent
ZERO time teaching writing.