
Thursday, August 1, 2013

The Big Idea, or Focus, Cross-Referenced to Basic Rules of Item Writing

What comes before preparation is intention, which we previously discussed here. Still, the concept of the Big Idea bears further exploration.

Let's consider how we might approach this grade 4 standard from the CCSS, RL.4.2:
Determine a theme of a story, drama, or poem from details in the text; summarize the text.
This standard is passage-dependent; students read a story, poem, or play (or excerpts of the same) and then answer questions about what they read.

This standard requires two distinct subskills: determining a theme and summarizing text. 

Either may be assessed with multiple-choice, constructed-response, or technology-enhanced items, although I note that in an ideal world, we wouldn't use multiple-choice for summarizing, but would instead ask students to create the summary. Again in that ideal world, it's best if we provide the student with opportunities to demonstrate mastery of a particular skill by allowing the student to perform the skill; however, we often operate under constraints that exclude the ideal. That's okay.

After we've read all of our ancillary support materials and have thoroughly acquainted ourselves with the story, poem, or play (for less experienced item writers and for all item writers without a strong background in literary analysis, I suggest making an outline of and annotating the passage in order to avoid the trap of writing superficial and repetitive items), we determine the theme(s). There may be more than one. Out of fairness, choose the strongest theme that is most clearly supported and most thoroughly developed in the passage. The theme may be stated explicitly or may be implied by the characters' words and actions.

Here is our passage, "A Boy's Song" by James Hogg.

    Where the pools are bright and deep,
    Where the gray trout lies asleep,
    Up the river and o'er the lea,
    That's the way for Billy and me.

    Where the blackbird sings the latest,
    Where the hawthorn blooms the sweetest,
    Where the nestlings chirp and flee,
    That's the way for Billy and me.

    Where the mowers mow the cleanest,
    Where the hay lies thick and greenest,
    There to trace the homeward bee,
    That's the way for Billy and me.

    Where the hazel bank is steepest,
    Where the shadow falls the deepest,
    Where the clustering nuts fall free,
    That's the way for Billy and me.

    Why the boys should drive away,
    Little sweet maidens from the play,
    Or love to banter and fight so well,
    That's the thing I never could tell.

    But this I know, I love to play,
    Through the meadow, among the hay;
    Up the water and o'er the lea,
    That's the way for Billy and me.

We would probably use call-out boxes to define some of the vocabulary--"lea" and "nestling" stand out as words likely to interfere with student understanding.

If we're writing a multiple-choice item, the stem will look like this:
What is a theme of the poem?
Or we might identify the poem only by its title ("What is a main theme of 'A Boy's Song'?") if we plan to write another item about genre characteristics ("How does the reader know 'A Boy's Song' is a poem?").

Often at the lower grades, we use "theme" and "main idea" as synonyms; depending on curriculum, grade 4 students may not yet be familiar with the specific terms for narrative elements, and we don't want to erect unnecessary obstacles for those students, so we might write a stem that looks like this:
What is a main idea of the story?
I prefer "a" rather than "the" in order to allow for variety in literary interpretation; we'd follow the client's preference on this. In this case, a clear theme is the joy of spending time in nature. Now we have a stem and the correct response:

What is a theme of the poem?
A the joy of spending time in nature
B [TK]
C [TK]
D [TK]

Next we'd write three distractors (wrong answers). Each distractor should have a rationale--that is, each should embody a specific mistake or breakdown in comprehension or literary analysis that might hinder a student en route to determining the theme. The rule in item writing is that, given the evidence in the text, distractors must be "plausible but not possible." The distractors should be clearly wrong to the student who is able to "determine a theme...from details in the text."

Many clients require item writers to provide rationales or justifications for the wrong answers; I support this wholeheartedly as valuable practice for inexperienced item writers. Experienced item writers have the rationales in their minds already, so it's just a matter of typing them.

When we write the distractors, we must stay focused on our Big Idea. To do that, we'd consider the breakdowns that occur when students attempt to identify a theme, which means thinking about the process of making meaning from text. We read the poem, step back, and come up with the overarching meaning: the joy of spending time in nature. Then we think about how a student might falter in putting the pieces of the poem together to see that big picture. A student might get stuck on a detail of the poem and mistake it for a theme. A student might confuse theme and subject. A student might focus too narrowly.
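
For item writers who like to keep their drafts organized, here is a minimal sketch--Python, purely illustrative; the field names are my own and not any client's template--of an item shell that keeps the Big Idea, the stem, and a rationale slot for each distractor in one place. The distractor text stays TK until we write it.

```python
# A hypothetical item shell for drafting -- field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Option:
    label: str
    text: str        # "TK" until drafted
    rationale: str   # the mistake that leads a student here (or why the key is correct)

@dataclass
class Item:
    standard: str
    big_idea: str
    stem: str
    options: list = field(default_factory=list)

item = Item(
    standard="RL.4.2",
    big_idea="Determine a theme of a poem from details in the text",
    stem="What is a theme of the poem?",
    options=[
        Option("A", "the joy of spending time in nature", "correct: supported and developed throughout the poem"),
        Option("B", "TK", "gets stuck on a detail and mistakes it for a theme"),
        Option("C", "TK", "confuses theme with subject"),
        Option("D", "TK", "focuses too narrowly on one part of the poem"),
    ],
)

for opt in item.options:
    print(opt.label, opt.text, "--", opt.rationale)
```

Nothing fancy--just enough structure to keep each distractor tied to its rationale as the drafting goes on.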

Next up: constructing plausible but not possible distractors.

What I'm reading: The Reivers by Faulkner and Imaginings of Sand by André Brink.




Tuesday, October 9, 2012

File Under: The Law of Unintended Consequences, Cross-Referenced to Undesirable Outcomes

From the National Bureau of Economic Research, hat tip to Inside Higher Ed, an indication that overtesting is no bueno.

Ian Fillmore and Devin G. Pope of the University of Chicago studied student performance on the AP exam and found:
. . . strong evidence that a shorter amount of time between exams is associated with lower scores, particularly on the second exam. Our estimates suggest that students who take exams with 10 days of separation are 8% more likely to pass both exams than students who take the same two exams with only 1 day of separation.
This is of particular interest to me for a variety of reasons. Since the passage of NCLB, testing in grades 2-12 seems to occur at an astonishing frequency. Not only are there state tests in ELA and math and, in some grades, social studies and science, but there are usually interim district tests (benchmark, call them what you will) administered once or more per quarter in both ELA and math, along with the classroom teacher's tests and quizzes in every content area, and then there are supplementary tests administered in programs such as Accelerated Reader (please don't consider this mention an endorsement; more on this later).

Testing is not instruction. It seems obvious, but it needs to be said. When kids are being tested, they're not learning.

If you asked why all the tests, teachers and district personnel would say that they need to test in order to find out if kids are learning. Which might be true if they weren't testing quite so much.

The more testing, the less instruction, the more homework. The burden for instruction is offloaded to the children. They're supposed to be teaching themselves. This, in spite of a growing body of research that tells us how ineffective homework is:

The results of national and international exams raise further doubts. One of many examples is an analysis of 1994 and 1999 Trends in Mathematics and Science Study (TIMSS) data from 50 countries. Researchers David Baker and Gerald LeTendre were scarcely able to conceal their surprise when they published their results last year: "Not only did we fail to find any positive relationships," but "the overall correlations between national average student achievement and national averages in [amount of homework assigned] are all negative."

(No one likes hearing that about homework. A teacher I know tells me that when she assigns less homework, parents complain. They worry their kids aren't working hard enough. As a parent, I was often astounded by the amount of homework expected from my children. Clearly no teacher ever sat down and worked his or her way through the material, or the teacher would have discovered that the time on task was excessive.)

Not to mention the other obvious problems with such a scheme--I mean, have you ever launched some ambitious self-study program? To muster up the wherewithal is daunting enough for a grown-up of strong will, and yet, we expect this of a child who 1) lacks the body of knowledge and skills required for such self-study and 2) has yet to develop that kind of self-discipline.

What's sad is that the overtesting deprives kids of the joy of demonstrating what they've learned. When teaching is sound and kids are learning, they can't wait to show you what they know. That's when we know that the instruction is working.

References
Fillmore, Ian, and Devin G. Pope. "The Impact of Time Between Cognitive Tasks on Performance: Evidence from Advanced Placement Exams." National Bureau of Economic Research, Oct. 2012. Web. 09 Oct. 2012.
Kohn, Alfie. "The Truth About Homework." Education Week, 6 Sept. 2006. Web. 09 Oct. 2012.
"The Impact of Time Between Tests." Inside Higher Ed, 9 Oct. 2012. Web. 09 Oct. 2012.

UPDATE: fixed a bad copy-cut-paste.





Saturday, April 28, 2012

Pass the Pineapple

This, from Jo Perry, the beginning of a discussion about the larger context for the sleeveless talking pineapple:

An American child could go to a public school run by Pearson, studying from books produced by Pearson, while his or her progress is evaluated by Pearson standardized tests. The only public participant in the show would be the taxpayer.
If all else fails, the kid could always drop out and try to get a diploma via the good old G.E.D. The General Educational Development test program used to be operated by the nonprofit American Council on Education, but last year the Council and Pearson announced that they were going into a partnership to redevelop the G.E.D. — a nationally used near-monopoly — as a profit-making enterprise.

I'm very interested in this conversation. I'll say upfront that although I find it disagreeable to point at problems without offering possible solutions, this one's got me baffled.

There are not-for-profits that publish curriculum and assessment materials. From what I've seen, many operate just as corporations do, but perhaps more cheerfully, said operations being subsidized by what I imagine are tax breaks that lend some comfort to the proceedings.

Twice I've been recruited by not-for-profit agencies that publish test materials. Nothing seemed any different than any corporation. During the come-work-with-us talk, the VP assured me that just because their agency was a not-for-profit, this did not mean they didn't make a profit. He told me they liked to think of themselves as a meritocracy, and then he wrote a salary figure on a piece of paper and slipped it across the table.

From what I've seen of public education--and I've spent a tremendous amount of time in classrooms at every grade, in review meetings with teachers, administrators, and other education professionals, and in state DOE conference rooms--I can't say that the public sector manages anything better than businesses or not-for-profits do.

(The elephantine factor is one problem. The larger a system is, the more difficult its management.)

In the immortal words of Tolstoy, "Everyone thinks of changing the world, but no one thinks of changing himself."*

Every time I've emerged from a classroom or a conference room (or even a presentation at an industry conference) feeling optimistic, it's been because of one person. A person who cares and whose work and words show that she cares. (I use "she" out of habit, not to be exclusionary.) There are brilliant and dedicated teachers in our schools. There are brilliant and dedicated leaders in education. (Some of these work with the corporations, by the way--there are certain names that always reassure me even before I read the recommendations based on their research.) There are people working in the corporations who are deeply and sincerely dedicated to serving students in their work.

There are many who aren't.

My feeling is that whatever your work, if you're just in it for the paycheck, you're doing yourself, your employer, and the end-user a tremendous disservice. We humans need to find and serve a higher meaning.

It shows when we don't. It shows, whether we work behind the counter at Starbucks or with a bunch of kindergarteners in an elementary school, or in a partitioned cubicle in a big corporation.

* I hope you understand I don't mean Gail Collins when I say this. She is calling our attention to a matter worthy of discussion.

Wednesday, April 25, 2012

Start Where You Are

The first time my second daughter began to read To Kill a Mockingbird, she gave up within ten pages. She was in the fifth grade. The reading was so difficult that she got no pleasure from it. When she read it last year, in the seventh grade, she loved it.

When we started the homeschooling, my first daughter was reading The Great Gatsby and my second daughter was reading Rebecca. (Both were thrilled with their choices, but neither had much interest in the other's.) When they finished, I was thinking about what to suggest next. I wanted them to read the same book so our literature class would be more focused than it had been. I'd told a friend that my first daughter had loved a YA historical novel set at the turn of the century, and my friend said why not Edith Wharton.

The plot of The Buccaneers seems perfect for 14-year-olds: a coterie of friends of differing temperaments and sensibilities poised at the brink of making life's big decisions.

But my daughters wanted to start reading right away, and we couldn't find The Buccaneers at any of the local bookstores, neither chain nor independent nor used. Not even our library had a copy. I'd loaned mine out and you know how that goes. We chose The Age of Innocence instead. 

Almost immediately, my second daughter said it was too hard. My first daughter agreed it was hard, but was willing to persevere. For two weeks, my second daughter lagged behind in her reading. Then, realizing her sister had left her in the dust, she buckled down to the task. This was two days ago.

As I was making dinner tonight, she showed me how few pages she had left to read (that would be three). In two days, she'd read more than 180 pages.
Me: What happened?
Second Daughter: I started liking it, and then I liked it so much I couldn't stop.
This happened to me with Moby Dick, though I was a much later bloomer. I'd tried to read it many times, from high school onward, but wasn't able to get past the first chapter (which is very unusual for me; I hate quitting a book, it just feels wrong) until I was nearly 30 and in grad school. Why? I have no idea. Maybe I was immature. Maybe it is simply that I would have loved it if I had persisted.

These matters sound little, but they are the matters that make up reading. What do we do when the text is just too difficult? How can we tell when the difficulty may be overcome once the reader is engaged, or whether the student needs to develop the muscles for the heavy lifting? If the latter, what's the best way to nudge the student into gaining skills and yet not push so hard that the student becomes discouraged?

I've talked previously about my remedial community college students, how some were surpassed by 4th graders when it came to writing skills. Ditto reading. It was a challenge. 

I tried to go at the problem in different ways. We read a lot in class. I assigned an anthology of short short fiction (Sudden Fiction) which they liked and actually did read. I often brought in copies of articles and essays from newspapers and magazines on topics that I thought they might like: "The Ways We Lie" by Stephanie Ericsson, "On Dumpster Diving" by Lars Eighner, "Starting Over with God" by Douglas Coupland, "Bet with America" by William "Upski" Wimsatt. I brought in stories by great writers who were also my friends: "Close Calls" by Josh Schneyer, "New Pants" by Jervey Tervalon, "Someone's Got Cold Feet" by Kia Penso.

Most of my students worked hard. Much of the reading was neither easy nor natural to them; Carol Jago talks about how we need to talk with students about working at reading and teach them how to persist in the face of polysyllabic words and complex syntax. Nor did we begin with what was most difficult--we started where they were.

And how should this relate to assessment? A friend (who's been in the business long enough and at enough different companies to see trends whisk in and fade away) and I were talking today about rigor and the Common Core Standards and how the standards require what many students simply aren't capable of. Yet. I certainly don't mean never. I just mean their skills need to get stronger. We wonder how states are going to address this problem.

We can't develop sophisticated, rigorous tests that all but the top ten percent will fail. We can't develop simple, less demanding tests that all but the lowest ten percent will pass.

And yet we must expect more from our students in order that they develop the skills they'll need as adults, one of those skills simply being that of keeping at it even when it is hard to do.




Friday, April 13, 2012

What Hath Been, What Will Be

There's nothing new under the sun.


AI scoring is always lurking at the edges of assessment talk. Because scoring depends on human labor, and is therefore expensive. Wouldn't it be great to automate scoring, eliminate human workers, and save a ton of money?


Erik Robelen at Curriculum Matters brings up the not surprising results of a study indicating that AI scoring may be as valid and reliable as traditional hand-scoring.


In traditional hand-scoring, a person reads an essay (or other written response), evaluates it against a rubric and other criteria (anchor papers, range-finders), and assigns a score. In AI scoring, the essay is automatically graded by a software program that uses some kind of formula (or combination of formulae) to assign the score.
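
To make "some kind of formula" concrete, here is a toy sketch--Python, entirely my own illustration and not any vendor's engine--of a feature-based scorer: pull a few crude surface features from an essay, then fit weights against essays already scored by human readers. Real engines use far richer features and models (and far more than two training essays), but the shape of the thing is the same.

```python
# Toy feature-based essay scorer -- an illustration only, not any vendor's algorithm.
import re
import numpy as np

def features(essay: str) -> list[float]:
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words) or 1
    return [
        n_words,                                       # length
        sum(len(w) for w in words) / n_words,          # average word length
        len(sentences),                                # sentence count
        len(set(w.lower() for w in words)) / n_words,  # vocabulary variety
    ]

# Hypothetical training data: essays already scored by human readers (scores 1-4).
train_essays = [
    "Short and plain.",
    "A longer essay with varied vocabulary and several sentences. It develops an idea.",
]
train_scores = [1.0, 3.0]

X = np.array([features(e) + [1.0] for e in train_essays])  # add intercept column
y = np.array(train_scores)
weights, *_ = np.linalg.lstsq(X, y, rcond=None)            # fit the "formula"

def machine_score(essay: str) -> float:
    return float(np.array(features(essay) + [1.0]) @ weights)

print(round(machine_score("A new essay of moderate length with some varied words."), 2))
```

The fitted weights stand in for the formula; a new essay is scored by plugging its features into that formula, with no human ever reading it.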


Before this year, I'd always pitched my tent in a clearing in the traditional scoring camp. But that preference assumes the best of all possible worlds, in the immortal words of Voltaire: a rubric that is fair and sound and based on observable, measurable traits; anchors and range-finders that are solid; and hand-scorers who are diligent. Because don't we all have this innate aversion to the impersonal coldness of receiving a grade from a non-sentient program? A program that is not even capable of doing the thing that it is grading us on?


And yet, an experience with one of my daughters' teachers last semester made me rethink AI, at least for classroom use, at least for teachers who lack education, training, and experience in assessment. (Which so many teachers do. Assessment, though it is a big part of education, just is not adequately addressed in teacher credential programs.)


My daughters' teacher applied (or failed to apply, as she marked a project down for a trait that didn't even appear on the rubric) a rubric that broke two of the biggest rules in evaluating student performance. One was that the descriptions of performance at different score points were exactly the same; the other was that the language was completely subjective--the rubric didn't describe anything that could be observed or measured.


Here's a bad rubric:
Score point 4--Awesome, excellent work, student does a great job.
Score point 3--Still really great, not quite excellent.
Score point 2--Um, kind of bad, actually.
Score point 1--Weren't you listening to anything I said all semester?
Score point 0--Now you're just trying to fail. Mission accomplished, pal.
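
For the curious, here is a sketch of what a crude automated sanity check against those two rules might look like--Python, with a judgment-word list I made up on the spot; an illustration of the idea, not a validated instrument:

```python
# A crude rubric sanity check -- illustrative only; the word list is my own invention.
SUBJECTIVE_WORDS = {"awesome", "great", "excellent", "bad", "good", "really"}

def check_rubric(rubric: dict[int, str]) -> list[str]:
    problems = []
    descriptors = [d.strip().lower() for d in rubric.values()]
    # Rule 1: descriptions at different score points must actually differ.
    if len(set(descriptors)) < len(descriptors):
        problems.append("Two or more score points share the same description.")
    # Rule 2: the language should point at something observable, not just judgment words.
    for score, text in rubric.items():
        words = [w.strip(".,!?").lower() for w in text.split()]
        judgmental = sum(w in SUBJECTIVE_WORDS for w in words)
        if words and judgmental / len(words) > 0.3:
            problems.append(f"Score point {score}: mostly subjective language ('{text}').")
    return problems

bad_rubric = {
    4: "Awesome, excellent work",
    3: "Awesome, excellent work",   # identical descriptors -- breaks rule 1
    2: "Kind of bad",
}
print(check_rubric(bad_rubric))
```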


When I asked the teacher about the rubric, she defended it by saying she'd been using it for 15 years. I tried to explain that a history of use without data was no guarantee that the rubric was sound, but I'm not sure she ever understood how unfair--how invalid and unreliable--her grading system has been for 15 years. No one else ever complained. No one else knew enough to complain.


In such a case, bring on the AI. For the good of the student.


More here if you want to go on.










Monday, April 2, 2012

Send in the Clowns

This story, about the list of topics banned from New York state tests, was the circus show last week:

In a bizarre case of political correctness run wild, educrats have banned references to “dinosaurs,” “birthdays,” “Halloween” and dozens of other topics on city-issued tests.
I couldn't help commenting that it's like complaining about a fleabite when you're getting attacked by lions. One doesn't have to look far to find much more serious problems in public education.


And if it is a problem, it's one of risk management (as more business-minded folks would say).


Dinosaurs are banned because conservative religious groups protest any content with a whiff of a hint of a suggestion that evolution may have occurred on Earth. Halloween is banned because conservative religious groups protest references to a holiday having to do with the supernatural (ghosts, demons). Birthdays are banned because the celebrating of birthdays is prohibited by some religious groups, the same groups that would protest if tests contained references to birthdays. Danged if ye do, danged if ye don't.


In the early 1990s, the California Learning Assessment System's crash and burn cost the state millions of dollars. The cause was public controversy about the assessment content.


I find all of this silly, just as I think it's nonsensical that McDonald's coffee lids have warning labels that the beverage is hot and may cause burns. But can you blame them?


All this blustering about the effect when we should be investigating the cause. But this isn't really intended to be news; it's just entertainment, as you can see from the readers who enjoy getting their rile on in the comments.









Wednesday, February 29, 2012

Rush to Judgment?

Not such great news from Paul Fain at Inside Higher Ed:
Large numbers of community college students are being placed into remedial courses they don't need, according to new studies that question the value of the two primary standardized tests two-year colleges use to place students: the COMPASS and the ACCUPLACER.
I find this news surprising. I taught both remedial and first-year English at two different community colleges (as an adjunct, like Professor X, who was called "an academic hit man" by the NYT, though I don't endorse or share all of his opinions and "lemony plaintiveness," which you can read about here).

My teaching career was short-lived, but I loved teaching at community colleges. I loved most of my students. The young ones made me laugh. The older ones worked hard. (Some of the young ones worked hard, too--but the older ones were clearly on a dedicated mission to improve their lives, and having worked many years at jobs beneath their abilities, had identified education as the highway to a better life.) They all sometimes surprised me with their storytelling--in a good way, once they figured out they could be themselves in their writing. In fact, I often wish I could have the opportunity and the financial independence (teaching jobs in this coastal area are as highly sought after as they are poorly paid; it's such a privilege to live near the beach) that would allow me to return to the classroom.

However, the students in my remedial classes definitely belonged there. Their need for remediation was so great that at first, I wasn't quite sure what to do with them. Truly, I had seen better writing from fourth-graders--and I don't mean fourth-grade prodigies, I just mean regular little fourth-graders. My students in the remedial classes could not write a coherent sentence--even a simple declarative one--and so that's where we started. By the end of the semester, many would be writing decent paragraphs. Not all. Not even most. I don't know if they were ready to go on to college-level classes in every respect, but even the ones who still needed improvement were more ready than they had been. But a lot of them had dropped out by then, too. That's the unfortunate nature of the beast.

Maybe 25% of the first-year English students were second-language learners. These were the ones that troubled me most. (Except that one really smart girl who had to bring her baby to class because her childcare arrangements kept falling apart. I worried about her a lot, too.) They didn't understand me when I talked, so during class, they would pull out calculators and work on assignments from other classes until the weight of peer pressure (I would stop talking and look at them and the other students would fidget and grumble) bowed them into submission, and they would sit and look at the floor praying for release from what must have been unbearable tedium.

I don't know how the second-language learners were able to pass the placement tests, but pass they did. Their research papers were often bought from those sleazy Internet sites. It didn't take much detective work to figure that out; I always had my students write a lot in class.

This was a long time ago, though, at the very beginning of my work in assessment (so I had no idea about, and even less interest in, the placement tests), and all is shrouded in mystery. Anyone who's ever taught the basics at a community college will probably agree with the necessity of some kind of placement test. Grades are an insufficient and often wildly inaccurate measure; some of my remedial students reported achieving As and Bs all through high school, which achievement must have been the result of their brute charm rather than any mastery of academic content.

According to the author of the study, "students who ignored a remedial placement and instead enrolled directly in a college-level class had slightly lower success rates than those who placed directly into college-level, but substantially higher success rates than those who complied with their remedial placement, because relatively few students who entered remediation ever even attempted the college-level course."

I'm going to spend more time with this study. Maybe there is the potential for flawed logic in the leap to "this raises questions not only about the effectiveness of remedial instruction, but also about the entire process by which students are assigned to remediation."

Sure, I'm open to that possibility. But there are other possibilities that may be overlooked in the rush to indict. What if the students who ignore the remediation recommendation are the self-selected, most highly motivated, do-or-die students? What if the students who follow the remediation recommendation are the ones who don't have the time, interest, inclination, or motivation to slog through years of night classes while working full-time?
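
That selection-bias worry is easy to see with a toy simulation--Python, with every number invented for illustration and none drawn from the study. If motivation drives both the decision to skip remediation and the odds of finishing the college-level course, the skippers come out far ahead even though remediation in this little model does nothing at all:

```python
# Toy simulation of self-selection -- every parameter here is made up for illustration.
import random

random.seed(0)

def simulate(n=100_000):
    outcomes = {"skipped remediation": [], "followed placement": []}
    for _ in range(n):
        motivation = random.random()            # latent trait, 0 to 1
        skips = motivation > 0.8                # only highly motivated students ignore the placement
        group = "skipped remediation" if skips else "followed placement"
        if skips:
            attempts = True
        else:
            # Many remediated students never reach the college-level course at all.
            attempts = random.random() < 0.4 + 0.3 * motivation
        success = attempts and random.random() < 0.2 + 0.6 * motivation
        outcomes[group].append(success)
    for group, results in outcomes.items():
        rate = 100 * sum(results) / len(results)
        print(f"{group}: {rate:.1f}% pass the college-level course")

simulate()
```

In this made-up world the pattern matches the study's finding, yet remediation isn't to blame; the two groups simply aren't comparable.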

Just something to think about. I have no dog in this fight. If the tests do not do what they are supposed to do, by all means, let's use a measure that is accurate and fair to students.

Monday, February 20, 2012

File Under: All Roads Lead to Rome, Cross-Referenced to Mandatory Reading

The book every K-12 content developer--assessment and curriculum--should read is Tested: One American School Struggles to Make the Grade, which was recommended to me by a colleague and likeminded comrade in quality assessment content development, Carmen, a senior level genius expert at Anonymous Testing Company.


To say that the tests administered yearly at grades 3-8 (inclusive) are high stakes cannot possibly begin to convey what this means for students, teachers, and school administrators and how NCLB has transformed the educational landscape. The flames are licking at their feet every minute of every day.


Even those of us who are informed talk about this--about how awful it is that teachers must teach to the tests and that the assessments are driving the curricula--but it's one thing to talk about it and another thing to experience it. If you're not a student, teacher, principal, or parent, this book is the closest you can get to the fire.


Like the children in Tested, my daughters have their skills and knowledge assessed so frequently at school (and what is taught is often so narrowly focused--the algebra teacher actually labels each homework assignment with the assessable standard and tells parents that she does this so she will know whether students will answer those questions correctly on the state test) that I'm shocked by how little genuine instruction they actually receive, so I supplement their classroom instruction with my own reading, writing, social studies, and science lectures. These may bore them nigh unto death, who knows, but I refuse to send them out into the world as little ignoramuses. (For math homework help, we turn to my friend and cohort and math content-area genius expert Carrie Frech, who works at a major testing organization.)


My daughters came home last week emitting little puffs of indignation over the latest district benchmark assessment. (They swiped it and brought it home to show me.) When I read it, I was horrified. It was a passage-dependent writing prompt. I didn't see a rubric (God knows what horrors hide behind that curtain), but I assume the responses were scored for reading and writing.


The story, a tedious adaptation of a folktale, was poorly written and pitched at the fourth-grade level (this, for an eighth-grade gifted and talented program; my daughters are currently reading The Great Gatsby and Rebecca for their next book reports, and yet they're being assessed with text suitable for fourth grade?). What was there was presented at an extremely literal level of understanding. Nothing in the story allowed for any genuine analysis of narrative elements or interpretation of literary devices, and yet the writing prompt required the students to do just that. I don't know how they could. You can't make a pie out of one apple. The multiple-choice section (developed by a company relatively new to the game, for whom I'd done some work a few years ago and whose lack of understanding of test development had shocked me at the time) was no better. The girls told me that there was one question so nonsensical that, districtwide, the teachers' form of protest was simply to give all of their students the answer. Does anyone see any value whatsoever in the use of such an assessment tool?


When I did some work for this company a few years ago, I observed their inexperience with and lack of knowledge about assessment. Maybe things have changed since then, I don't know. What I do know is that this company--the same one with little assessment background--doesn't perform any field testing of its test content, and there is no data whatsoever to indicate that we can make any kind of accurate inferences about what students know and can do based on such poorly constructed assessments. But this company does a good job of selling, and the districts buy the idea that the products will 1) give teachers information about their students that will help them get students ready for the state test and 2) predict how students will perform on the state test. Neither claim is possible, particularly with assessments that violate the most basic quality standards.


All roads lead to Rome; it all comes back to the Quality Manifesto.


(And it must be said that certainly there is room for quality improvements in the classroom as well, and there was room even before NCLB. What passes for instruction in some classrooms horrifies me just as much as any quality train wreck I see in the assessment world. Two teachers at my daughters' school ROUTINELY play audio of the textbooks instead of teaching--this, in science, which I think we can all agree requires hands-on instruction; it's still bad in reading, but not quite so bad. One of these teachers also ROUTINELY spends twenty minutes or so of the fifty-minute class period discussing her personal life with the captive audience of eighth-graders--they know all about her kids, her husband, her political views, her hobbies, her extended family, and her domestic habits. So do I. Unless someone is a friend, a loved one, or a celebrity, there's probably not much in her life you want to hear about for twenty minutes straight, and yet that is what this teacher subjects her students to instead of teaching them.)


UPDATE: Identified my book-readin' comrade by name. Thanks, Carmen!
UPDATE: Added a link.
UPDATE THE THIRD: Removed a link, anonymized an identity.