Cyborg students

Mechanical aids

When we test students today, what do we test?

Our tutor recently had occasion to pull out and work with a calculator from several years ago, from his days as a classroom teacher.  Somewhat to his surprise, it had a number of features not found in the machines in the hands of his students today.  That seemed to go against what is almost a law of nature: calculators (and cameras, and almost all digital devices) will always add capabilities, no matter how few people actually use them.  A little research showed that this particular machine had been barred from the standardized college-admissions tests because it was too capable, and later models left off some features in order to qualify.

But that only brings up the question of what level of capability should be allowed.  That in turn leads to the bigger question: what are we trying to test when we turn our students loose on the latest exam?  We hardly want to go back to the days of laboriously adding, subtracting, multiplying and dividing numbers by hand.  There were algorithms for finding square roots with pencil and paper, not to mention many a tedious bit of work with the log table.  If we can come up with the number by punching a few keys, learning is the better for it (always assuming the student has absorbed what’s actually happening, by no means a given).
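For those who never met it, here is a minimal sketch, in Python, of that digit-by-digit square-root method.  The function name and details are our own illustration, not any standard routine; it simply shows how much bookkeeping the student once did by hand:

```python
def longhand_sqrt(n, digits=6):
    """Digit-by-digit square root, after the pencil-and-paper method.

    Returns the square root of the non-negative integer n to the given
    number of decimal digits, as a string.
    """
    if n < 0:
        raise ValueError("n must be non-negative")

    # Group the digits of n into pairs, working out from the decimal point.
    s = str(n)
    if len(s) % 2:
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
    # Append zero-pairs, one for each decimal place we want.
    pairs += [0] * digits

    root, remainder = 0, 0
    result_digits = []
    for pair in pairs:
        # Bring down the next pair, just as on paper.
        remainder = remainder * 100 + pair
        # Find the largest digit x with (20*root + x)*x <= remainder.
        x = 9
        while (20 * root + x) * x > remainder:
            x -= 1
        remainder -= (20 * root + x) * x
        root = root * 10 + x
        result_digits.append(str(x))

    int_len = len(pairs) - digits
    return "".join(result_digits[:int_len]) + "." + "".join(result_digits[int_len:])


print(longhand_sqrt(2))          # 1.414213
print(longhand_sqrt(152399025))  # 12345.000000
```

Each pass through the loop yields one digit of the answer, exactly as one once worked column by column down the page.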

The current SAT (a distant descendant of the ones we took, long ago) has no calculator-free section; indeed, the student is assumed to have a calculator that can graph any function he or she can type in.  Basic graphing, then, is not being tested.  On a much larger scale, a university math professor we know said during the COVID lockdown that he considered his students cyborgs: they would be able to search the internet for answers, contact their friends, even use online services that solve problems for you.  He was testing not what his students could do on their own, but what resources they could find and use.  This shifts the focus of the course from mathematics to the details of various bits of software, something we’ve noted before.  It is perhaps a better measure of how the student will do out in the working world.  But it says less about his or her grasp of mathematics, and it depends on the ephemeral world of software.

Another professor we know is facing a bigger challenge, one we have no idea how to address.  He teaches English, which involves literary analysis and essay-writing.  The advent of AI programs, and especially their proliferation (so that we’re hard put to avoid them), makes it enormously more difficult for him to spot plagiarism.  There are plagiarism-spotting programs, but it’s a software arms race, and the teacher is out of the loop.  Left to one side are things like the author’s ideas and expression, and how the student is to put together what ideas he or she may have.

What do we want our students to learn, and how do we tell whether they’ve done it?