I don't see why schools should be able to tell if a student is revising last minute. Why should they care? Passing the exam is all that's required. No revision method is perfect, and being dyslexic I know that the way I work differs from how my friends work. Scary is what this is.
>I can easily imagine students rigging their reader with activity simulators.
If this had existed when I was in high school, I definitely would have spent hours writing scripts to automate interaction with the ebook, and more hours trying to reverse engineer their private API for phoning home with the 'engagement' data... before subsequently failing the test.
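The kind of rig being described isn't hard to imagine. Here's a minimal sketch of a fake "engagement" event generator; the event fields and timing model are entirely my own assumptions, since no real reader API is specified:

```python
import json
import random
import time

def simulate_reading_session(pages, mean_seconds_per_page=90, jitter=0.4):
    """Generate fake 'engagement' events for a reading session.

    Each event mimics what an instrumented ebook reader might report:
    a page number, the (simulated) dwell time on that page, and a
    timestamp. The fake clock advances instead of actually sleeping.
    """
    events = []
    now = time.time()
    for page in range(1, pages + 1):
        # vary dwell time around the mean so the pattern looks human
        dwell = mean_seconds_per_page * random.uniform(1 - jitter, 1 + jitter)
        events.append({
            "page": page,
            "dwell_seconds": round(dwell, 1),
            "timestamp": round(now, 1),
        })
        now += dwell
    return events

session = simulate_reading_session(pages=5)
print(json.dumps(session, indent=2))
```

A real version would then POST these payloads to whatever reverse-engineered endpoint the reader phones home to, which is exactly why raw "hours read" telemetry is such a gameable metric.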
If a student is able to study all the content the night before and still get a decent grade compared to long-haul studiers, then there's something seriously wrong with the coursework.
The coursework has to be calibrated so that an average student can succeed. There will always be students who are smarter than average and can pick up the material much faster, or who have a personal interest in the material and have studied it on their own in the past (e.g., some kids actually like math or science or computers and study these topics extensively outside of school).
Outside of K12 the kids might not be kids. Don't try to lecture me on what happened in 1989-1992 wrt the Soviet Union; I was there (well, glued to an American TV set, not physically in .ru) ... even if the freshmen in the class weren't conceived yet and the whole story is news to them.
If I wanted a CS degree, I had to sit through a night-school "what is an IP address" class, even if during the daytime I had a fat stack of current Cisco certs.
That was my general method at school. I found it works for most exams, since the material they might cover is pretty well established and you can read the notes version in a day.
But the irony for me was that, as a language and linguistics major, I liked the difficulty of CS precisely because it was not within my grasp.
I ended up in IT, and I routinely tell people how I failed a midterm in Intro to C++. I had to trace through a roughly 30-line C++ function with arrays and pointers and write out the correct cout << output, only to realize in the last five minutes of the midterm that I had made one trivial error around the third of those thirty steps.
And so a score of 68 it was. But my god did I love Solaris, g++, and the Unix way well after that.
That also has "higher level" problems. Say adjunct history prof A convinced his students to spend, on average, 6 hours per week watching videos, and adjunct prof B only motivated his kids to watch 2 hours. Obviously the contract we'll renew is A's; heck, we should put A on the tenure track, what a motivational educator he is!
If the reason were collected and analyzed, it might turn out that prof A doesn't actually speak English or is utterly incompetent on the topic, so the kids are teaching themselves (true story from my youth!).
Now there may be anecdotal verbal end-of-semester survey results, but that's just meaningless prose from kids; the number of hours of video watched, on the other hand, is a number, therefore it is meaningful...
Because if a student fails the exam and only studied at the last minute, that is a totally different scenario from failing but diligently reading the coursework as expected, even re-reading above and beyond. The latter case suggests the student just does not understand the material and would benefit from some one-to-one time with a tutor, whereas the first case suggests the student is lazy and needs a slap across the face.
Of course. And that would show up in the statistics. If most of the students in a class who studied with it are failing, rather than having a normal distribution of grades, we know the book is not doing its job. The educator still gets useful information, and would know whether to continue using that textbook next term.
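The check described above is simple to state. Here's a rough sketch; the function name, pass mark, and fail-rate threshold are my own assumptions, not anything from an actual grading system:

```python
from statistics import mean, stdev

def textbook_health_check(grades, pass_mark=60, max_fail_rate=0.5):
    """Crude signal: if most students who studied with the book still
    fail, the book (or the course built around it) isn't doing its job."""
    fail_rate = sum(g < pass_mark for g in grades) / len(grades)
    return {
        "mean": round(mean(grades), 1),
        "stdev": round(stdev(grades), 1),
        "fail_rate": round(fail_rate, 2),
        "book_suspect": fail_rate > max_fail_rate,
    }

# a roughly normal class vs. a class the book apparently failed
healthy = textbook_health_check([55, 62, 70, 74, 78, 81, 85, 90])
suspect = textbook_health_check([20, 31, 35, 40, 44, 52, 58, 88])
print(healthy["book_suspect"], suspect["book_suspect"])
```

The point is just that the educator sees a distribution-level signal about the textbook, not a verdict on any individual student.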
Like everything else: used right it is an aid, used wrong it is a negative.
If your goal is to pass tests you're missing out on the point of education. Which is to learn stuff (usually, at least).
The tests are the metric we measure success with; they are not the goal. Confuse the two here and you will most likely keep confusing them, and you'll find yourself optimising for extra lines of code or whatever metric you mistake for a goal in your professional life as well.
Used right, this could replace part of the focus of tests and instead put the focus on learning. Which is way better. Used wrong, it could become another metric that people want to maximise, because they've confused the metric with the goal.
Metrics are indicators of success/failure, they are not success/failure, so if you try to optimise towards metrics you ultimately just screw yourself.
Use quantitative metrics to identify areas of greatest opportunity, qualitatively inspect and analyse to improve. Repeat. That's the most effective way I know of learning. Used that way, metrics are useful, and having more metrics is good, not bad.
Except for the parents, it's nobody's business what children do after school. How can children learn about freedom when all their movements are scrutinized and analyzed all the time?
Plus it's really not a good metric. There are many ways to learn a topic: there are fast learners, those who gather information through other sources, those who like to work in groups. Inevitably these approaches to learning will be hindered by myopic interpretation of this metric.
"How can children learn about freedom when all their movements are scrutinized and analyzed all the time."
There's an implicit assumption there--consider that there is decent evidence that modern schooling (periods, bells, etc.) is a direct product of the need for factory shift workers.
There is no inherent good (from the government point of view--local, state, or federal) in teaching about freedom.
> Inevitably these approaches to learning will get hindered by myopic interpretation of this metric
I really don't understand this attitude. Of course this will not work if it is used incorrectly. Why would you expect any different? However, the same can be said for normal course work. What if the teacher gets the students to read only the first word of each sentence? This shows books will never work, and lecturing is the only way to teach.
Putting that aside, the point is that as you say, there are many ways to learn a topic. Asking children how they best learn things is not going to get useful answers most of the time, but analysis of reading and research patterns using sophisticated data mining or ML software will allow teachers to segment their students into groups that all learn in a similar way. So everyone benefits.
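The "segment students into groups" idea above is essentially clustering. A minimal sketch with a toy k-means over made-up features (hours of reading per week, number of re-read sessions); the features, data, and initialisation are all my own assumptions:

```python
def kmeans(points, k=2, iters=20):
    """Tiny k-means over student feature vectors, e.g.
    (hours read per week, number of re-read sessions)."""
    pts = sorted(points)
    # deterministic init: spread starting centers across the sorted points
    centers = [pts[i * (len(pts) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [tuple(sum(v) / len(v) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

students = [(1, 0), (2, 1), (1.5, 0), (9, 5), (10, 6), (8, 4)]
centers, clusters = kmeans(students, k=2)
print(sorted(len(c) for c in clusters))  # sizes of the two learner groups
```

A real system would use far richer features (dwell times, revisit patterns, search behaviour) and a proper library, but the mechanism is the same: group students whose reading patterns look alike, then tailor teaching per group.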
As I suggest above, I'd go further than instrumenting the ebooks. If they are on a tablet device using the Internet, then instrument the browser too, and you can correlate Wikipedia and Google searches with whatever page of the textbook they were on, and see what topics are unclear and always need additional research. You would also find out which students are deeply interested in the topic, as they read on past the end of the assignment, and download extra information or search for related topics. And you can discipline those who never open the book at all, and spend all their time looking at amusing cat pictures.
With such power there is also a responsibility to manage the student's privacy appropriately, but with a school-provided device, similar to a work laptop, they should have no expectation of absolute privacy anyway. I think any worries about monitoring pale into insignificance against the powerful augmented learning regimes that can be created, and a suitable auditing and management routine will prevent any abuse.
I think because we're addressing a different point. Of course it's good to learn more about how humans work and help students to learn but there has to be a scope in it. Hopefully learning is about being curious, discovering new things, playing with the mind. Metrics won't help bored students, they'll just learn to flip the pages at the right time while watching TV. And now interested students get distracted by the same concerns of being over-watched and not doing the same thing. It's really a terrible solution.
If you want to collect metrics, then make them a personal tool. Make a utility for the student, one that only they can access, that helps them learn about their studying patterns, gives them hints, ... If they want to share the data, it's on their own terms.
> Metrics are indicators of success/failure, they are not success/failure, so if you try to optimise towards metrics you ultimately just screw yourself.
True for the student, but true for the professor too... Why produce a bogus metric such as reading assiduity?
Applied correctly, I can see it being useful in modeling machine learning algorithms. More data points are almost always a good thing there.
If simply dumped in front of school teachers and administrators, I suspect it would be harmful. The data will simply reinforce biases those people already have about certain students.
Interestingly, the data could be turned around on the teachers as well. A teacher may be so effective that students can pass courses without reading the material. Administrators might not recognize the astounding ability of the teacher and instead interpret it as a teacher handing out easy As.
I've always liked to imagine that if influential educators were genuinely interested in learning, they would make a better effort to account for Campbell's Law.
Perhaps there is an alternative explanation for the rampant misuse of metrics observed in most modern education systems? One that doesn't involve gross incompetence, I hope.