Saturday, December 5, 2015

Vocabulary work for the reluctant reader

I've posted before about the fact that many of our students have small vocabularies and that this hampers their reading comprehension.  (You can read that post here.)  By far the best way to increase your vocabulary is to read a wide variety of materials - particularly books or articles at a challenging reading level.  I have also written about how difficult it is to get students to actually do that.

Vocabulary building needs to begin - well, from birth really.  However, college-bound students should consciously work on their vocabularies beginning no later than middle school.  How can you get a reluctant reader to work on his or her vocabulary without reducing it to rote memorization?  Comics can be a great tool.  Calvin and Hobbes is a wonderful comic series with some terrific vocabulary words in it, and it appeals to everyone age 10 and up.  The artist, Bill Watterson, has retired, but compilations of the strips are still in print.  You can also find them online.

I recently discovered another great vocabulary-building book series.  Stephan Pastis, who draws the nationally syndicated comic strip "Pearls Before Swine", has also written a series of books about an elementary school character, Timmy Failure.  The series is aimed at students in late elementary school, but I have enjoyed them, and I'm 53.  These chapter books are in the style made popular by the Diary of a Wimpy Kid series.  "Pearls" doesn't tend to be filled with SAT-level words, so imagine my surprise and delight when I encountered the following in the first Timmy Failure book, Mistakes Were Made: rigorously, depiction, alleviated, stipulations, assent, prudent.  And that was just the first 50 pages!  Each book in the series is entertaining - a quick, easy read.  Unlike Calvin and Hobbes, which some children would need to read with a dictionary handy, the Timmy Failure books define many of their harder words through the characters as the story goes along - though in both series you can get the basic gist of many words from context.

These books make great holiday gifts!  

Monday, November 16, 2015

Study Materials Review: Kaplan New SAT Premier 2016

As soon as the College Board announced changes to the SAT, people began publishing study materials for it.  The more responsible publishers waited until after the practice tests had been released.  These materials are now finding their way to bookstore shelves.  I recently examined Kaplan's offering.

As a general rule, I like Kaplan's stuff better than Barron's or Princeton Review, so I was hopeful that this would be a viable option.  Like most of what's out there, it's better for some things than for others.  Here are some impressions:

First of all, this thing is HUGE.  1337 pages, PLUS a DVD, PLUS access to online materials.  The sticker price is also relatively steep at $36.  (It's a lot less on Amazon.)  I haven't read the whole thing, and I haven't examined the DVD or the online materials, so keep that in mind.

About 1/3 of the book is math review.  They assume nothing about the student's preparation: they begin with PEMDAS.  In my opinion this is a waste of space.  A student who needs help with PEMDAS needs help from a human, but whatever.  I did appreciate that there is a wide variety of targeted exercises for each section.  This book is quite useful for the student who has seen the material in class, but may not have been exposed to all of the different ways he or she could be asked about it.  For each section there is also an estimate of how likely the topic is to appear on the test.  The sections on statistics are quite nice for the strong math student who just never covered those topics, and I like the fact that they included instructions on how to use the statistical features on the TI-84 graphing calculator.  That will be useful for a lot of students.

The second third of the book covers reading and grammar.  I thought the passages were well-chosen, and they advised the student to mark up the passage and take notes in the margins.  However, they didn't go into enough detail about what the student should be marking.  Then they actively discouraged re-reading the text while answering the questions on the grounds that there wouldn't be enough time.  I haven't yet done much work with students on this test, but my impression is that only the slowest students should be pressed for time.  Maybe I'll change my mind this spring.  The practice questions didn't feel like SAT questions.  So far we only have four practice tests to go on, but still....  They were just "off."

The last third of the book consists of two practice tests followed by answers and explanations for the tests and all of the practice questions.  I got disgusted and quit midway through the reading test.  The questions were just strange.  I haven't really examined the grammar yet.  I thought the math questions were the right style, but I haven't decided if their mixture of questions and topics makes for a good practice test.  I thought the tests released by the College Board were uneven, so it will take a few administrations of the real thing to really know.

Anyone wishing to start studying for the reading or grammar sections should order Erica Meltzer's books.  Kaplan's New SAT Premier 2016 might work well for the math while we wait for Mike McClenathan's new edition to be published.  Hopefully, the math portion of this tome will eventually be published separately - and at a lower price.

Monday, August 31, 2015

Hello, Founding Documents!

When revisions to the SAT were first announced, there were several changes proposed for the reading section.  One of the changes was the elimination of the fill-in-the-blank vocabulary questions that started off each reading section.  I thought that was an interesting choice at the time because, as unpopular as the vocab questions were, testing vocabulary is an efficient way to assess reading comprehension: vocabulary knowledge is a reliable predictor of reading comprehension, and vocab questions are a lot easier to write than passage-based ones.  In addition, testing vocabulary directly allows more questions to be asked in the time allotted, which should allow for finer differentiation among test-takers.

The reasoning offered at the time was that being familiar with "obscure" vocabulary did not predict success in college.  In the very same announcement, David Coleman said that the reading passages would include "the founding documents."  At this point, the test-prep tutoring community responded, "Ummm.....have you actually READ the founding documents??"  Talk about obscure vocabulary.  I sat down and compiled a list of obscure or antiquated words from the Constitution ALONE that numbered in the hundreds.  Then the practice PSAT was posted without any founding documents in it, and I figured the College Board members had come to their senses.

Shortly after that, ETS posted four practice SAT tests.  I took the first one, and then I got busy.  REALLY busy.  It's been a crazy summer in test-prep land.  All the tutors I know have been swamped.

The first practice SAT had a pretty easy reading section.  Lovely.  There was plenty of time, and the most difficult passage had clearly been written for grownups but wasn't ridiculously challenging.  I didn't take tests 3 and 4 until this week, and HOLY CRAP!!  Hello, founding documents!

So there is a lot we could say here about the wisdom of putting founding documents in some tests but not others - including the fact that this inconsistency would be a perfectly valid reason for avoiding the revised SAT.  However, if you are going to take it, you need to show up prepared to interpret writing that is well over 200 years old, obscure vocabulary and all.



Sunday, June 28, 2015

Use the summer for reading!

I tell my students that raising their scores on the reading sections of the college entrance examinations is the most time-consuming improvement to make because they need to read.  A LOT.  Then I tell them that summer is the perfect time to work on this.

This is my annual post to encourage reading.  This article inspired me to get off my duff and write this year's edition.

The article looks at what students are reading.  There is good news and bad news.  The good news:  reading is not dead.  Quite a few kids are reading and some of them read quite a bit.  The bad news:  too many kids read hardly anything, and when they do read, they choose books that are written at an easy reading level.

The article includes a plea for students to read the classics and argues that a student's reading list should include both fiction and non-fiction.  It points out that raising your reading level requires work - which is probably why kids avoid it - but that when students make the effort they enjoy the more difficult books.

Here's the thing.  Students who read regularly and who can comprehend material at an 11th-grade level or higher will have a tremendous advantage on their college entrance exams, in college, and later in life.  For a quick-and-dirty estimate of the reading level of a piece of writing, see this previous post.
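
If you're curious how those quick-and-dirty estimates work, here is a minimal sketch of one widely used formula, the Flesch-Kincaid grade level, in Python.  (This is my illustration, not necessarily the method from the earlier post, and the syllable counter is a crude heuristic, so treat the output as a ballpark figure.)

    import re

    def count_syllables(word):
        # Very rough heuristic: count runs of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def grade_level(text):
        # Flesch-Kincaid grade level:
        #   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        word_count = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (word_count / sentences) + 11.8 * (syllables / word_count) - 15.59

    sample = "We hold these truths to be self-evident, that all men are created equal."
    print(round(grade_level(sample), 1))

Real readability tools use dictionaries and better syllable counts, but even this crude version will usually separate a middle-grade novel from a news feature.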

Thursday, March 26, 2015

Impressions of the new practice PSAT

A few days ago the College Board posted a new PSAT practice test.  This is the practice test for the redesigned PSAT to be given in October 2015.  It is also a glimpse of what the redesigned SAT might look like. 

There has been a great deal of speculation about the redesigned test.  A number of people have opined that the new test would be an “ACT clone.”  I myself have speculated that the new test would be designed as more of a high school exit exam than a college entrance exam.  Quite a few pundits pointed out that the SAT was losing market share to the ACT and thought that the redesign might be an effort to win that share back.

Upon looking over the test, I am now prepared to make the following statements:

The writing section IS barely distinguishable from the writing portion of the ACT.  Otherwise, this test is in no way an ACT clone.  On the other hand, I do feel that this test represents a fundamental shift in purpose.

There are a number of possible purposes for giving a standardized exam.  (This would be as opposed to a teacher-made assessment.)  Here are a few:

  • To document student learning (or lack thereof) of particular skills and concepts
  • To distinguish among (or rank) students
  • To drive the curriculum*


Prior to 2015, the PSAT’s fundamental purpose was a balance of the first two items.  As the National Merit Scholarship Qualifying Test, it has been used to distinguish top students from the pack, and it has documented whether or not students have a grasp of particular skills and concepts.  It has NOT been used to drive the curriculum...until now.  In my opinion, several of the changes were specifically designed to have an impact on the nature and content of classroom instruction.

One of the elements of Common Core language arts is a focus on having students read critically.  In an effort to have the students respond to what the author actually said – as opposed to how they feel about what they think the author said – students are being asked to point to pieces of the text as “evidence.”  In the PSAT critical reading portion, students are asked to choose the best evidence for almost every question.  Now, I’m not a language arts teacher, and I don’t have any special critical reading or psychometric expertise.  However, it seems to me that this is about the same as asking the same question twice.  In other words, if you correctly answer the original question, then the answer to the “evidence” follow-up is trivial.  If you missed the original question, it would be nearly impossible.  Will there be some kind of scoring mechanism that uses this question-pairing to determine when the correct answer was obtained by guessing?  Maybe.  It is more likely an attempt to make sure teachers require their students to give evidence in class.
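
For what it's worth, here is the back-of-the-envelope arithmetic behind that guessing question (my numbers, assuming four answer choices per question and independent blind guesses):

    # Chance that blind guessing gets BOTH halves of an evidence pair right,
    # assuming four answer choices per question and independent guesses.
    p_single = 1 / 4
    p_pair = p_single * p_single
    print(p_pair)  # 0.0625, about 1 chance in 16

A correctly answered pair is therefore fairly strong evidence that the student wasn't just guessing, which is what would make such a scoring mechanism plausible.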

Here is another example, from the math section.  Months ago, when the sample questions were released, I noticed that some of the questions were much longer and more involved.  This represents a distinct shift.  Up until now, I have told my top students, “If you are more than 3 steps into an algebra process, you probably missed something.”  One of my main criticisms of some of the test prep books out there has been that too many of the math questions can ONLY be solved with the application of tedious algebraic steps and are thus not representative of the real thing.  However, this has changed with the PSAT practice test.  It’s interesting.  If I’m a professor of mathematics or engineering, I’m interested in several aspects of my students’ math skills.  I want them to have a solid grasp of the concepts and good number sense, AND I want them to be able to keep track of what they are doing through a long problem that requires many steps and sub-steps to solve.

Up until now, the major college entrance exams have done a decent job of testing the first two.  (The SAT was better than the ACT, in my opinion.)  They really haven’t attempted to test the third.  This is largely because a multiple-choice or single-final-answer exam format is a TERRIBLE way to assess that third skill.  If the student’s answer is incorrect, you don’t know whether he really can’t negotiate the process or whether he just made a silly error in the middle.  Graded homework and teacher-made assessments where partial credit is given are much better means of determining whether or not the student can handle a long problem.  Both of those are already reflected in students’ grades.  By trying to test this skill in a multiple-choice format, you add no new information.  The only reason I can come up with to include problems like this is to encourage math teachers to have students practice longer, more complex problems.

When the College Board first announced that the SAT would be re-designed, an admissions officer at a small, selective school wondered in an online forum, “Will the rSAT do a better job of distinguishing among students at the top end of the spectrum?”  This was what she was hoping for.  Other users of the forum predicted that it would not.  To do so would require a test with a wider standard deviation, and a large contingent of the (math-challenged) public believes that a wide standard deviation is inherently “unfair.”  In fact, the speculation was that the new test would do a worse job of highlighting differences among students.  Given that the redesigned test appears to have abandoned that goal altogether in favor of driving the curriculum, I’d say that admissions officers at selective schools will be plumb out of luck.
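
To see why the standard deviation matters here, consider a quick sketch (my illustration, with made-up numbers): if scores were normally distributed around 500 with no ceiling, the point gap between a 95th-percentile student and a 99.9th-percentile student would scale directly with the standard deviation.

    from statistics import NormalDist

    # Score gap between a 95th- and a 99.9th-percentile student for
    # several standard deviations (illustrative numbers only).
    for sd in (50, 100, 150):
        dist = NormalDist(mu=500, sigma=sd)
        gap = dist.inv_cdf(0.999) - dist.inv_cdf(0.95)
        print(f"SD = {sd}: top-end gap of about {gap:.0f} points")

Squeeze the standard deviation and the strongest students pile up within a few points of one another (and of the test's ceiling), which is exactly the differentiation problem the admissions officer was worried about.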



* You may be wondering what it means to have a test “drive the curriculum.”  Let’s look at an extreme example.  Throughout the 1990s and early 2000s, North Carolina had a state writing assessment.  In fourth, eighth and tenth grades, students had to write a timed essay and send it off to be scored by specially trained “experts.”  After several years of dismal results, some statisticians cried foul.  They pointed out that the scoring methods were severely flawed and thus that the scores were ultimately meaningless.  (By the way, the methods used to score the ACT and SAT essays have some of the same issues.)  The state’s surprising response?  “Yes, we know.”  Students (and by extension their teachers and parents) suffered through this test for YEARS.  Why?  Because if there’s a writing test – even a flawed one – teachers will spend time teaching writing.  The average amount of classroom time spent on writing instruction quadrupled.  Prior to the test, some teachers had spent ZERO time teaching writing.



Sunday, March 15, 2015

Something to think about if your kid has received a college rejection letter

I was sitting in a metal folding chair in a hotel ballroom. The speaker had just said something that hit me so hard that I couldn’t listen to what she said next. I needed to stop and process that statement. I was thinking, “Holy crap!” and “Well, now that she’s said it, it seems so obvious. Why didn’t I figure this out before?”

The speaker was head of admissions at an elite private university. She was talking to a group of elite high school students and their parents, and what she said was this, “It is not my job to make the perfect admissions decision on your kid. No one is paying me to do that. It is my job to build the next freshman class.” She went on to point out something we already knew: The school could only accept a small fraction of the fully qualified applicants.

As high school students, or parents of high school students, or maybe just members of the public with an opinion, we tend to view the process from the outside in. And it’s like looking into a fishbowl; the view is distorted. We tend to walk around with an impression of the process, and we think that it works something like this: The school takes all of the applications and ranks them from first to last. Then it takes enough applications off the top of the list to fill the class, and boom! Done! And because we subconsciously realize that it would be impossible to rank the kids from first to last without considering numerical measures like GPAs and test scores, we assume these must play a very large role in the process. If my kid doesn’t get in, that means he didn’t rank high enough on the list. The fact is, it doesn’t work that way. A lot of factors come into play. At the end of the admissions period, the incoming class needs to have all kinds of kids: some who play on the football team and some who play in the band, some who join the Young Democrats and some who join the Young Republicans, some who build robots and some who work on the school newspaper. All of them need to have a shot at doing the coursework.

Lots of factors come into play, and it is certain that the school did not make the exact right admissions decision for every single applicant. They may not have made the exact right decision on your kid. I’ve been there, and you have my sympathy. It doesn’t mean they’ve made a value judgment about your kid, and it doesn’t mean he wouldn’t have done well there. It just means when they selected the components of the class, they didn’t select your kid. They may have made a mistake. Or maybe not. Either way, the system isn’t broken. Because making the perfect decision on your kid wasn’t one of their goals in the first place.

Saturday, January 10, 2015

I'll say it again: Top SAT scores begin in the sandbox!

I've written about this topic before here:  Top SAT scores begin in the Sandbox.  However, it bears repeating:  When I work with high school students who are studying for their college entrance exams, I sometimes see evidence of what I have come to refer to as a Sandbox Deficit.  Children who don't spend enough time in unstructured play fail to develop the foundation for basic math, reading and science.  To compensate, they memorize steps, algorithms and strategies, but that only takes them so far.

Unfortunately, academics are getting pushed to earlier and earlier ages.  The good news is that more people are waking up to the problem.  Spread the news!  Share the studies!  And make time for children to play!

The most recent article I have run across regarding this subject can be found here:  http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/02/06/a-really-scary-headline-about-kindergarteners/

And here is what Matt Walsh had to say about the same headline referred to in the above article:
http://themattwalshblog.com/2014/02/10/your-5-year-old-failed-a-standardized-test-therefore-he-is-stupid-insane-and-doomed-to-a-life-of-failure/