Tuesday, January 26, 2010

What's Actually Happening in a Tutoring Environment?

So, given that we want to simulate human one-on-one tutoring, or get as close to it as we can in order to achieve the 2-sigma effect, what is it about the tutoring experience, exactly, that we want to replicate?

So far, the Chi article has been the most compelling for me.
Chi, M. T. H., et al. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471-533. (Here's a link to the PDF)
This study used a control group with no tutoring, a group receiving normal tutoring, and a group that received "suppressed tutoring" in which the tutors did not provide information and answers but did still provide an interactive environment. The students who received suppressed tutoring still progressed just as well as those who received normal tutoring.

So it's not the tutor as a repository of knowledge that makes a difference. It appears, on the surface at least, that it doesn't even matter how much the tutor knows. A student can still excel when paired with a tutor who doesn't provide answers but does ask questions - who provides an interactive platform on which students can construct their own knowledge. (Did I just say "construct their own knowledge?" Dangit. That's one of those cliché phrases I promised never to say.)

I found two other articles that report on an AI tool called AutoTutor, which attempts to replicate actual human tutoring.
Graesser, A., Wiemer-Hastings, K., Wiemer-Hastings, P., Kreuz, R., & the Tutoring Research Group. (2000). AutoTutor: A simulation of a human tutor. Journal of Cognitive Systems Research, 1, 35-51.

Graesser, A., VanLehn, K., Rosé, C. P., Jordan, P. W., & Harter, D. (2001). Intelligent tutoring systems with conversational dialogue. AI Magazine, 22(4), 39-52. (Here's a link to the PDF)
The articles don't report any sort of testing on the tool and, quite frankly, I remain rather dubious after reading them, but they do bring up one interesting point that goes along with this train of thought.

"We discussed three projects that have several similarities. AUTOTUTOR, ATLAS, and WHY2 all endorse the idea that students learn best if they construct knowledge themselves. Thus, their dialogues try to elicit knowledge from the student by asking leading questions. They only tell the student the knowledge as a last resort."

While I don't know that AutoTutor really does a very good job of creating an opportunity for students to construct knowledge themselves, there is still the assumption that what's really going on here doesn't have as much to do with the tutor as it does with the student. The reason students flourish in a one-on-one environment is the student half of the one-on-one. The student needs to be questioning, interacting, involved. So does it matter who's on the other side of the table?

Is this just, then, a pedagogy question? I'm increasingly tempted to think it is.

Thursday, January 14, 2010

The Basics of RSS

Another resource for my 286 students - What is RSS and how do I use it?


Blogging Through History

Here's a cool little video I found to share with my 286 class. Blogging may be new to you, but is the concept really that revolutionary?


Monday, January 11, 2010

The 2 Sigma Problem

This is the first of a series of weekly comments I'm going to be adding as part of my participation in a class this semester that is exploring Benjamin Bloom's 2 Sigma Problem. Here is my initial explanation and reaction to this concept and where I hope it takes me.

Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4-16. online pdf

Bloom, of taxonomy fame, first wrote about a discrepancy discovered between achievement levels of students in traditional classrooms, with a teacher-to-student ratio of about 1 to 30, and students tutored one-on-one. That a discrepancy would exist between these two situations is rather obvious, but the measured discrepancy between the average student in a one-on-one situation (so, not just the best and brightest) and the mean of the students in a traditional classroom is phenomenally large: two sigma, that is, two standard deviations above the mean. How much of a difference is that? Well, in a familiar normal distribution like IQ scores, two standard deviations above the mean corresponds to an IQ of 130 and a percentile rank of 97.7 - higher than 97.7% of the population. A difference this substantial in test scores (and it was test scores that Bloom measured, though he did so over and over again, in various disciplines, with similar results) is rather remarkable. Roughly speaking, it means that even a poorly performing student in a one-on-one tutoring situation is still scoring somewhere around the mean of the students in the traditional classroom.
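To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python (standard library only). The 100/15 IQ scale and the percentile helper are conventions I'm assuming for illustration, not anything Bloom specifies:

```python
import math

def percentile(z):
    """Share of a normal distribution falling below a given z-score."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Assumed IQ convention: mean 100, standard deviation 15
mean, sd = 100, 15
z = 2  # two sigma above the mean

print(f"IQ equivalent: {mean + z * sd}")              # 130
print(f"Percentile rank: {percentile(z) * 100:.1f}")  # ~97.7
```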

This is all well and good in a world where we can all have our own personal tutors. The two sigma statistic sounded a lot like good supporting evidence for Rousseau, who's been saying "I told you so" for 250 years. The question that remains, though, is how we make this work when a personal tutor for every student is simply not feasible.

Truth be told, when I first heard about this problem my first instinct was "Funny. Each child comes pre-packaged with, not one, but two of its own personal parents." But maybe that sort of an angle is too big for our current socio-cultural milieu. There are certainly a lot of things that would have to change in some very drastic ways before many of our families would be able to educate their own children. I am going to keep that thought on my periphery, though, as I continue to dive into this question.

Bloom addresses the quest to best simulate the two sigma effect in this 1984 article. He reports the results of extensive testing of every combination of factors he could identify that might approach the 2-sigma level while remaining in the economically feasible 30-to-1 classroom. Mastery Learning on its own has a 1-sigma effect, which I think is rather significant. Many of the other variables are simple pedagogical improvements, and there appears to be an additive effect when some are combined: some researchers report results as high as 1.6, 1.7, and even 2 sigma - and this still in a classroom with 30 students and one teacher.
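For a sense of scale, the same percentile conversion sketched above can be applied to these reported effect sizes (a rough illustration that assumes normally distributed scores with equal spread in both groups):

```python
import math

def percentile(z):
    """Share of a normal distribution falling below a given z-score."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Effect sizes mentioned above, in standard deviations
for effect in (1.0, 1.6, 1.7, 2.0):
    print(f"{effect} sigma: average treated student outscores "
          f"about {percentile(effect) * 100:.0f}% of the comparison class")
```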

Variables having to do with home environment and peer group are also addressed, and this goes back to my initial thought about the potential for parents to make up a lot of the difference. Bloom concludes that, while effective, changing the home environment is difficult and costly - it requires parent training workshops and the like. I think this is something we must not forget, however. Even if we can't institute sweeping family reform from the school or political end of things, we can institute it among our own families, friends, and communities. And this may be a good place to try to translate this research for families. Now that we have the internet especially, what could we be providing in terms of free, online resources for parents? We have a lot of potential here.

The next question, though, is technology, and this is where the literature after Bloom's 1984 article generally tends to take us.

Mott, J., & Wiley, D. (2009). Open for Learning: The CMS and the Open Learning Network. In Education, 15(2). full text online

For example, Mott and Wiley have addressed the two-sigma problem 25 years later in a context Bloom probably never foresaw. Bloom spoke of textbooks as educational technology; he probably wasn't imagining online course management systems.

There has been a lot of energy, work, and funding poured into online course management systems and other computer-based educational technologies in the years since the personal computer first became available and affordable in an educational context. But even after years of work and billions of dollars, we have yet to see a transformational effect in the classroom. I am currently writing a lit review article on this very subject in my work at the CTL - why do supposedly revolutionary software tools deliver revolutionary results sometimes and, well, less-than-revolutionary results at other times? As Mott and Wiley suggest, it's because we're using newfangled tools in ways that hold back student learning. CMS tools, as well as other technological marvels (*cough*PowerPoint*cough*), are being used largely for administrative efficiency. And nothing in Bloom's original research suggests that administrative efficiency is the key to enhanced learning.

The principles of teaching and learning are where we need to focus our efforts. What do we learn from tutoring itself that can shape our approach? Bloom's original research shows that it's learner-centered instruction and mastery learning (an approach antithetical to competitive, letter-grade-based motivation), combined with the fostering of higher-order thinking skills, that produces effects closest to those of one-on-one tutoring.

Chi, M. T. H., Siler, S. A., Jeong, H., & Yamauchi, T. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471-533. online pdf

This article addresses similar issues. What is it about tutoring that enhances learning? Chi et al. suggest that it may not even be what the tutor says or teaches! Their results indicate that it is the mere fact that students are working in an interactive learning environment that creates the 2-sigma effect.

Can we build a computer tool that perfectly simulates the human mind and creates a robot replica of a wise tutor for our little Émile? Probably not. Not that we haven't been trying. Can we create powerful technological tools with streamlined, automated efficiency to run our schools and universities? You bet we can. Can we use these capabilities - especially in an age of Web 2.0 and a worldwide network of easily accessible, computer-facilitated interactive tools that don't require advanced software training - to harness the power that interactivity offers? One would certainly hope so.