Thursday, June 08, 2017

 

Assessing Tutoring


For the sake of argument, let's say that you're trying to assess whether (or the degree to which) a tutoring center helps with student success.  (I’m not trying to single out tutoring; the same argument could apply to advising, or the library, or to any number of things.  I’ll just use tutoring here to make the question clear.)

You could look at usage figures to start.  Presumably, if usage is minimal, you can infer that not much is happening.  If usage is substantial and sustained, you can infer that at least some students see value in it.

But that's pretty indirect.  By that logic, a popular movie must be a good one, and a little-seen movie must be terrible.  We know that's not true.  

We can, and do, administer student surveys to see if they’re happy with their tutoring.  That’s useful, but still indirect if the goal is improved academic performance.

Assuming you had pretty good data systems, you could compare the grades (or completion rates) of students who got tutoring with the grades of students who didn't.  But that, too, is a noisy indicator.  Say that students who received tutoring got an average of a half-grade higher than those who didn't.  Is that because of the tutoring, or is that because the students with drive and initiative are more likely to bother to show up for tutoring?  If it's the latter, then you're confusing correlation with causation.   Alternately, it may be that the students on the cusp of failing are likely to seek tutoring, while those who are comfortably acing the course don't bother.  Average the two effects together and you wind up with statistical mush.

That's the scientific term.
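A toy simulation makes the "mush" concrete. Every number here is invented for illustration, but it shows how a genuine half-grade tutoring effect can all but vanish from a naive comparison when both strugglers and self-starters select into tutoring:

```python
import random

random.seed(0)

# Toy simulation (all parameters invented): tutoring truly adds half a grade,
# but two opposite selection effects feed the tutored group, since struggling
# students AND highly motivated students are the ones who show up.
students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)   # latent preparedness
    drive = random.gauss(0, 1)     # initiative
    tutored_flag = ability < -0.5 or drive > 0.5
    boost = 0.5 if tutored_flag else 0.0   # the real tutoring effect
    grade = 2.5 + 0.8 * ability + 0.3 * drive + boost
    students.append((tutored_flag, grade))

tutored = [g for t, g in students if t]
untutored = [g for t, g in students if not t]
gap = sum(tutored) / len(tutored) - sum(untutored) / len(untutored)
# The naive gap lands nowhere near the true 0.5 boost:
print(f"naive tutored-minus-untutored gap: {gap:.2f}")
```

With these made-up parameters, the downward pull of low-ability students and the upward pull of high-drive students nearly cancel, so the naive comparison hides a real half-grade effect entirely.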

The question matters when it comes to resource allocation.  Assuming finite resources, dollars spent hiring more tutors are dollars not spent on hiring more faculty, or more advisors, or more financial aid counselors.   If tutoring helps on its own, then there's an argument for beefing it up.  If it's mostly just a way to identify the self-starters, then beefing it up wouldn't help much.  If anything, it might even water down its value as a sorting mechanism, to the extent that a sorting mechanism has value.

The question isn't unique to tutoring, of course.  It applies to all sorts of interventions.   We know that students who join campus clubs complete at higher rates than students who don't.  By itself, that could be because clubs offer the benefit of a sense of belonging and a group of friends, or it could be because successful students are more likely to join clubs.

Presumably, you could settle the question with control groups over time.  Take two largely similar groups of students, and ban one of them from tutoring.  Then measure outcomes.  But that raises some pretty obvious ethical questions for the ones who are paying the same tuition and are banned from tutoring.  I'd prefer not to try that method.

I'm guessing that Brookdale isn't the first college in history to face these questions.  For purposes of our own outcomes assessment, it would be nice if we could answer them not just globally, but locally: in other words, I'm less interested in the success payoff of tutoring generally than I am in the success payoff of ours specifically.  That means figuring out reasonably simple approaches to local data.


Wise and worldly readers, is there a reasonably straightforward way to separate correlation and causation on the local, campus level?  I assume that our tutoring center helps, but it would be nice if I had something resembling evidence.

Comments:
Our tutoring center tracks students who use its various services, and I am told that the data for larger classes that make substantial use of tutoring services (like algebra) are quite clear. Have you tried that, or are your objections purely theoretical? That result could certainly be due to a simple correlation with motivation, but that ambiguity could be resolved by case control methods. (Match with a student with a similar GPA or SAT score.) You don't have to do a long term study that would have to go through an IRB, but it would cost a fair bit of extra time for the IR folks.
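A minimal sketch of what that case-control matching might look like in code. The field names (`prior_gpa`, `course_grade`) and the 0.1-GPA matching tolerance are invented placeholders; a real study would use whatever the IR office's data dictionary provides and, ideally, a proper matching procedure rather than this greedy one:

```python
def matched_gap(tutored, untutored, tolerance=0.1):
    """Greedy 1:1 case-control match on prior GPA.

    Pairs each tutored student with the closest unused untutored student
    within `tolerance`, then returns the mean course-grade difference
    across pairs (or None if no pairs could be formed).
    """
    pool = sorted(untutored, key=lambda s: s["prior_gpa"])
    diffs = []
    for case in sorted(tutored, key=lambda s: s["prior_gpa"]):
        # Closest remaining control by prior GPA, if any.
        best = min(pool, key=lambda s: abs(s["prior_gpa"] - case["prior_gpa"]),
                   default=None)
        if best and abs(best["prior_gpa"] - case["prior_gpa"]) <= tolerance:
            diffs.append(case["course_grade"] - best["course_grade"])
            pool.remove(best)  # each control is used at most once
    return sum(diffs) / len(diffs) if diffs else None
```

Matching on prior GPA (or SAT score) holds one proxy for motivation roughly constant between the groups, which is the whole point of the case-control design the commenter describes.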

Of course, the problem with doing a case control based on GPA is that students tend to be repeat users of the tutoring center. Their incoming GPA could already reflect an average student's prior use of those services.

But who cares if it is "just" a correlation? The same is probably true for advising services. If motivated students use those services, that is a good thing. That is fewer hours they have to work than if they had to pay someone for the same tutoring. What you should focus on is getting the semi-motivated students motivated to study, and so on down the line. Create a culture of learning.

Finally, don't discount the value of employing tutors. They get to have the best kind of part-time job, one that further improves their academic skills, and they provide a role model and informal advice about succeeding in 2nd year classes or after graduation. And who knows? A tutor might decide to go into HS or college teaching. I got my start, and my best training, as an undergrad tutor.
 
I don't think that there is a good way to separate correlation from causation in most of these educational interventions, largely for the ethical reasons you cite. So we are stuck with anecdotes, plausible hypotheses, and probably-doesn't-hurt theories, using the correlations that are measurable to guide implementation.

More and more financial resources at University of California are being spent on support for marginal students, as the admissions criteria have moved to "local context", where the top 9% of any California public high school are UC-eligible, no matter how little the high school teaches. Whether this is a good use of resources is debatable (many of the students would be better served by the smaller classes of a community college, followed by transfer to UC), but UC gets a lot of political good will from having such a large fraction of its students be Pell grant recipients or first-in-family to go to college, and there is a lot to be said for the public good of all public colleges sharing the burden of correcting the damage done by some of the poorer public high schools.
 
Does your CC do midterm grades? If so, you could use midterm and final grades as a pre/post score and the tutoring data as your predictors of pre/post change in a regression model. That is, do students who did not use the TC during the first half of the semester and who received poor midterm grades, but who use the TC in the second half, have larger positive changes in their final grades than students in the same situation who do not use the TC at all? (Of course, this is an overly simplistic hypothesis, and all of that could be parameterized with a much finer grain, but you get the gist.)
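The pre/post comparison above can be sketched as the simplest two-group version of that regression model. The field names (`midterm`, `final`, `tc_first_half`, `tc_second_half`) and the 2.0 "poor midterm" cutoff are invented stand-ins for whatever the registrar and tutoring-center data actually contain:

```python
POOR_MIDTERM = 2.0  # assumed cutoff for a "poor" midterm grade (invented)

def late_tutoring_effect(records):
    """Among students with poor midterms and no early tutoring, compare
    the mean midterm-to-final change for late tutoring-center users
    against non-users. Returns the difference, or None if either group
    is empty."""
    eligible = [r for r in records
                if r["midterm"] < POOR_MIDTERM and not r["tc_first_half"]]
    users = [r["final"] - r["midterm"] for r in eligible if r["tc_second_half"]]
    nonusers = [r["final"] - r["midterm"] for r in eligible if not r["tc_second_half"]]
    if not users or not nonusers:
        return None
    return sum(users) / len(users) - sum(nonusers) / len(nonusers)
```

A real analysis would replace this group-means comparison with an actual regression so that the change score could be modeled against multiple predictors at a finer grain, as the comment suggests, but the identification idea is the same: students serve as their own pre-period baseline.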
 
I am an adjunct faculty member at a community college and teach college algebra. The math department gives students 1 bonus point on exams for each hour of algebra tutoring they receive, up to 3 points. The assumption is that students who get tutoring will perform better in algebra, so we should provide incentives for them to go to tutoring. This makes it even harder to determine cause and effect. Do students go for tutoring because it is helpful or because they want the bonus points? If students who go to tutoring do better than those who do not, is it because they have greater knowledge or because of the bonus points? I think students' grades should be based on performance, not whether or not they go to tutoring, but I am not going to rock the boat.
 
Your stat folks will tell you that if you have a reasonably rich dataset based on student info, you can generally get at these questions.

 
I suggest surveying the students about the use of tutoring services.
Did they use them?
If so, did tutoring affect their grade? Affect their confidence in the subject?
If it didn't help, why not? (The information was too simplistic, wrong, out of touch with what was happening in class, too complicated; I couldn't connect with the tutors.)
Why didn't they use them? (Wrong time, didn't know about them, clashed with other classes, other people told me they were no use, didn't have the time.)
Etc.
 
Without question our tutoring center helps our students get better grades. Students show up at the tutoring center, get out their laptops, and then get "help" from the tutor on their online homework. As homework counts for a non-trivial fraction of the grade, the students get better grades because of the "tutoring."
 