
> There are lots of examples of MAID being pushed upon people that do have other options and made to feel like it's the only one.

I'm not really surprised. It looks like Canada's healthcare costs are growing exponentially, and are outstripping growth in GDP. These costs are mostly driven by hospitalizations. If a government can carefully promote the message that hospitalization means suffering, suffering is hard, a life with suffering is not worth living, and that relief is quick and easy, then a route is charted to a reduction in healthcare expense. It would certainly help if the large physician organizations are on board, and the nation's major broadcasters lean into euthanasia-friendly messaging.


What is the source that healthcare costs are growing "exponentially" and are outstripping growth in GDP? I would accept that it's increased, but definitely not exponentially. As well, I live in Canada and have not seen any such messaging as you describe.

> What is the source that the healthcare costs are growing "exponentially" and are outstripping growth in GDP?

See here, in particular the first figure: https://www.cihi.ca/en/national-health-expenditure-trends-20...

And here (slightly dated, but still valid): https://www.fraserinstitute.org/studies/sustainability-of-he...

> I live in Canada and have not seen any such messaging that you have said.

Here's a not-particularly-subtle example: https://www.cbc.ca/news/canada/new-brunswick/maid-medical-as...


Neither of those links gives us evidence of "exponential" growth as one would normally define it. I did agree that it has definitely increased, just not exponentially. As well, the first link demonstrates that GDP has increased more than healthcare expenditure; only in the "forecasted" area does it outstrip GDP as an annual percentage.
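One way to make the "exponential" question concrete (with made-up numbers, not CIHI data): exponential growth means a roughly constant year-over-year ratio, so a series can rise steadily, and even outpace GDP, without being exponential.

```python
# Hypothetical spending series, for illustration only (not CIHI figures).
spending = [100, 110, 120, 130, 140, 150]        # linear growth: +10/yr
ratios = [b / a for a, b in zip(spending, spending[1:])]
print([round(r, 3) for r in ratios])             # shrinking ratios -> not exponential

gdp_like = [100 * 1.06 ** t for t in range(6)]   # constant 6%/yr growth
ratios2 = [round(b / a, 3) for a, b in zip(gdp_like, gdp_like[1:])]
print(ratios2)                                   # constant 1.06 -> exponential
```

The same check works on real expenditure tables: if the year-over-year ratios shrink over time, the growth is sub-exponential no matter how large the absolute increases look.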

If you search the CBC, they have articles both for and against MAID. I think it's kind of silly to say that all positive news articles about MAID are government propaganda, as there are likely to be a non-zero number of positive experiences with it. Should the government not allow the press to make any comments on MAID, to avoid biasing anyone for or against it? For example, here is a negative video the CBC posted about MAID: https://www.cbc.ca/player/play/video/9.6521196


> There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that.

It's quite obvious from your position on this matter that you're not a practicing scientist, so it's very unfortunate that your position is so assertive, as it's mostly wrong.

To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes. Most publications involve some form of hypothesis-prediction-experiment-result profile, and it is the training and expertise (and corroboration by other experiments, and time) that help determine which of those papers establish new science, and which ones go out with last week's trash. The findings in these areas are seldom accessible until the field is very advanced and/or in practical use, as with the example of GPS you gave elsewhere.

> The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality.

There is; it's called a textbook.


> It's quite obvious from your position on this matter that you're not a practicing scientist

You're correct, I'm not. But I'm also not scientifically ignorant. For example, I actually do understand how GPS works, because I've read and understood technical treatments of the subject. But I also know that I don't have to have any of that knowledge to know that my smartphone can use GPS to tell me where I am accurately.

In other words, it's quite obvious from your position that you haven't actually thought through what the test I described actually means.

> To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes.

Sure you do. See my examples of GPS and astronomers' predictions of comet trajectories downthread in response to MengerSponge.

It's true that for predictions of things that the general public doesn't actually have to care about, often it's not really possible to check them without a fairly detailed knowledge of the subject. But those predictions aren't the kind I'm talking about--because they're about things the general public doesn't actually have to care about.

> There is; it's called a textbook.

Textbooks aren't independent. They're written by scientists.

I'm talking about a record that's independent of scientists. For example, being able to verify that GPS works by seeing that your smartphone shows you where you are accurately.


An example of how this ideal can go horribly wrong is CERN.

There's one apparatus (of each type) and each "experiment" ends up with its own team. Each team develops their own terminology, publishes in one set of papers, and the peer reviews are by... themselves.

I don't work at CERN, but that criticism was from someone who does.

They were complaining that they could not understand the papers published by a team down the hall from them. Not on some wildly unrelated area of science, but about the same particles they were studying in a similar manner!

If nobody else can understand the research, if nobody else can reproduce it, then it's not useful science!

Note that this isn't exactly the same as Sabine's criticism of CERN and future supercolliders, but it's related.


I'm surprised by what you say, it is not at all my experience. Are you sure you are not over-interpreting what your friend said, or that your friend's experience was not unusual?

1) People at CERN publish papers in "normal" physics journals, which do the usual peer review. Few of the articles that I've peer-reviewed myself were from my own experiment. There is, of course, also internal reviewing within each collaboration, but that is to improve quality, and it is totally natural and obvious if you want to have a collaboration (by definition, a collaboration is a place where people read each other's work and give feedback to each other). But it is totally different from "the work is only reviewed by the collaboration".

2) I've worked ~5 years on one experiment and ~5 years on another, and I did not notice any different terminology. In both experiments, I very rapidly met and learned the names of people from other experiments working on similar subjects. I don't know of any workshop or conference where the invited scientists are not from different experiments. During these events, there is a lot of exchange.

3) What is true, and maybe the reason for your misunderstanding, is that you are strongly advised not to share non-cross-checked material outside the collaboration. The goal is to avoid biasing the independent experiments: if you notice a strange phenomenon that later turns out to be a statistical fluctuation, or if you use a new methodology that later turns out to have unnoticed systematic biases, then mentioning it to the other experiment will "contaminate" them: they may refocus their research or adopt the flawed methodology. But this applies only to non-cross-checked material, and it makes no sense to claim it has a negative impact (many scientists, in collaborations or not, throughout history, don't like to share preliminary results before they have gained good confidence that what they saw is reliable).

4) Do you have an example of something that one could not understand even though it was done down the hall? I don't recall "not being able to understand" (the point of a publication is to explain, so people care about making it understandable). I do recall "harder to understand", but that was often with people from my own collaboration, and the reason was that they needed to use some mathematical tools I did not know, and there was not really any other way.

I'm sure there are cases where two groups end up diverging and it makes collaboration more challenging. But I really doubt it is anything other than exceptional, and it is something everyone in the collaborations will try to mitigate.

Your comment makes me wonder to what extent outsiders to CERN hold plenty of myths totally disconnected from reality. I guess it is a good example of why people like Hossenfelder are a problem: they feed on these myths and cultivate them.


> journals, which do the usual peer review.

They don't, though! They farm it out to expert physicists, who in the case of CERN research almost certainly also work at CERN.

> Few of the articles that I've peer-reviewed myself were from my own experiment.

But were they from CERN?

> Do you have example

This was a few years ago, in a comment here on HN; it would be hard to dig it up without an AI reading through everything.


Some important things not mentioned in this press release (not to detract from the idea of new treatment approaches of any sort):

- All patients had their tumors surgically removed before they were started on treatment. Thus the trial wasn't testing cure so much as delay of recurrence.

- These were very superficial tumors, meaning they were growing on the very surface of the inner bladder, just like skin tags. These aren't the ones that kill people. Patients with superficial bladder cancer who don't respond to BCG can be treated for quite a while just by having the tumors surgically removed whenever they recur (using a minimally-invasive procedure known as a transurethral resection of bladder tumors, TURBT).

- Fun with words: the press release called this a clinical trial, but it's not -- it has no controls, no real statistics, no randomization, none of the things that make up the usual standard in medicine. The authors of the paper call it a "study", which is basically a research experiment. They don't use the word "trial" at all in the paper.

Having said all that, I still look forward to seeing a proper trial.

Edit: wordsmithing.


> the press release called this a clinical trial, but it's not

Yes, it is.

Any intervention in humans that is meant to create generalizable information regarding a treatment intervention is a clinical trial.

The quality of the information is not as strong as a double-blind, placebo controlled, RCT, but it is still accurate to call it a clinical trial.


By the tenor of your response, I assume you understood what I meant in this very non-medical of forums, which means you also understood why the paper's authors themselves chose not to call their study a trial (even though they registered it as a "clinical trial", as is necessary for any clinical study in humans involving a treatment intervention). Which leaves me wondering about the purpose of your response.


I'm perturbed by PR-driven rhetoric in the medical world because of what it causes, e.g. another commenter asking about a family member and whether this could be helpful. Seeing what isn't directly visible in the PR is important in this case.


> Used Tesla car prices are now down 4.59% year-over-year

Their graph shows a drop from $33k to $28k since August '24, or about 18%. Did they leave a factor of 4 behind somewhere?


Looks like 4.59% is not YoY but just the 90-day figure; in the model-breakdown table you reach an average of 18%, which matches your estimate...
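A quick sanity check on how the two figures could relate (assuming the 4.59% is a per-90-day drop that compounds): four such periods compound to roughly the annual figure.

```python
# If prices fall 4.59% per 90-day period, compounding over four
# periods (~1 year) gives the annualized drop.
q = 1 - 0.0459            # fraction of value retained per 90 days
annual_drop = 1 - q ** 4  # compounded over ~4 periods per year
print(round(annual_drop * 100, 1))  # ~17.1, close to the ~18% average
```

So the 90-day and annual numbers aren't a factor-of-4 error; they're consistent once compounding is taken into account.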


A lot of the purposes in education for which the use of AI would be considered "cheating" involve writing assignments of one sort or another, so I don't know why most of these education scenarios don't simply redirect the incentive.

For example, in an English class with a lot of essay-writing assignments, the assignments could simply be worth 0% of the final mark. There would still be deadlines as usual, and they would be marked as usual, but the students would be free to do them or not as they pleased. The catch would be that the *proctored, for-credit* exams would demand that they write similar essays, which would then be graded based on the knowledge/skills the students would have been expected to gain if they'd done the assignments.

Advantages:

- No more issues with cheating.

- Students get to manage (or learn to manage) their own time and priorities, as is expected of adults, without being whipped as much with the cane of class grades.

- The advanced students who can already write clearly, concisely and convincingly (or whatever the objectives are of the writing exercises) don't have to waste time with unneeded assignments.

- If students skip the assignments, learn to write on their own time using ChatGPT and friends, and can demonstrate their skills in exam conditions, then it's a win-win.

This all requires that whoever is in charge of the class have clear and testable learning goals in mind -- which, alas, they all-too-often do not.


A lot of students, even at the college level, don't think that far ahead and make bad decisions because of short term thinking.

Look at any list of advice for new college students and almost every one of them includes "go to class". Simply attending class is way easier than homework and yet, when there's no short term consequences for not doing it, plenty of students will just not do it.

Cheating is another great example. Cheating in college is rampant because kids don't want to do the work they're assigned. I don't understand the logic behind the idea that if you tell all the kids currently using ChatGPT to write their essays, "Hey, you don't actually have to write that essay at all" that you think they will somehow choose to write it anyway. They're already choosing to ignore the long term benefits of homework even when there are short term consequences, so I don't see how removing those short term consequences will make things better.

If you tell kids there are no immediate consequences for not doing homework, many of them just won't do it and they will fail because they haven't learned anything.

Maybe you're okay with that. Honestly, I'm not actually trying to convince you that it's a bad idea. I just think that if your proposal is based on the idea that kids will choose to do something boring that they don't have to do in the short term because it benefits them in the long term, you're overestimating a lot of kids (and adults, for that matter).


There's a section in Zen and the Art of Motorcycle Maintenance (1974) where Pirsig takes this all the way to the final conclusion that there should be no grades at university, and no degree at the end, and then and only then will everyone who goes there actually be learning-motivated.


So. I teach at a university and I do give an "assignment" exactly like this.

In a few of my classes, I have final projects that teams work on. I also have presentations. I used to require them of all students; and quickly learned this is a good way to waste valuable time.

Now, all my presentations are completely optional for NO CREDIT. You don't get penalized if you don't do them, and perhaps more importantly, I give ZERO EXTRA CREDIT for doing them.

As you can imagine, every single presentation I've gotten from this has been absolutely worth it.


I do the same in my classes, and it's common practice in many courses in my dept, which may help, as the students know what is expected of them. I don't think it's wasting time. It motivates the students to know that they have to present in front of their peers, helps the shy ones get practice, and yes, the quality varies, but it's a very good way to share information within the class about different projects, even with a not-so-good presentation.


What proportion of students bother?


It seems to go in waves? Rarely is it one or two, I imagine there's some peer pressure thing going on. Something like all or nothing.


What were the best three?


Hmm, I mean, I've been doing this for years. Some were interesting because they DIDN'T accomplish much and things went bad, but then we could kind of post-mortem it in class.

Others had some pretty cool things that ended up in real life; I believe the official timers for the Florida Supreme Court testimony things came from one of my classes.


which students fail your class?


I get very few failures, but that's a selection thing; it's a junior senior "big picture" IT class.


understood thanks - but in those few cases how do you even determine who fails?


The problem is that the motivation from above (i.e., administration, state legislatures, employers, etc.) is no longer really about learning. We could have an entirely learning-motivated university right now and it would be considered a bad thing by many powerful people because it's not aimed at "preparing people for the workforce" (in part simply by providing that degree).


The students also want a certificate for their efforts. It's impossible to avoid signalling.


> It's impossible to avoid signalling

You can take that one step further. What kind of signal does “I can afford to go to University and not worry about credentials” send? I’d argue that’s realistic only for people who are willing to admit that they belong to a leisure class. In the US at least, we like to flatter the leisure class with the pretense that they worked hard to get there.


I learned recently that most universities in Switzerland have open admissions, where entry to a program is pretty easy. However, they do not hold anybody's hand, and you have to pass your classes or you can get kicked out quite easily. I am not sure what I am saying is completely accurate, but I feel like this is one model that would weed out the people who are serious from those who aren't.


The Open University in the UK (which has been running for decades) doesn't have any entry requirements for the first year of its undergrad courses, and the early modules definitely include a focus on getting people up to speed on academic writing, use of library tools, etc. I don't know how many people make it to the year 2 modules (which require passing year 1).


There are institutions like that. For example, the Collège de France, founded in 1530 and still active, neither administers tests nor grants degrees. It's purely about learning.

https://www.college-de-france.fr/en

https://en.wikipedia.org/wiki/Collège_de_France


I was recommended that book many years ago, when I was far too young to appreciate it. Maybe it's time to give it another go...


This is great, but is incompatible with charging people a year's income for it.


University provides, in the following order: prestige, connections, and knowledge, in exchange for money.


I went to a state school, and didn't get much prestige or many connections. I did learn how to be an engineer, and more importantly I learned how to be an adult. I think my time there was worth it.

Maybe this is true of liberal arts or business degrees? I don't know, but I don't think this is the opinion of anyone who went to engineering school.


If you missed "practice space to learn how to learn and to work with other people", your understanding is too flawed to forgive the obviously edgy take.


I assume that would fall under the listed "knowledge" category.


Perhaps, but to my mind knowledge and skills are qualitatively different.


> I don't understand the logic behind the idea that if you tell all the kids currently using ChatGPT to write their essays, "Hey, you don't actually have to write that essay at all" that you think they will somehow choose to write it anyway.

I unironically believe if you tell all the kids they don't have to write the essay at all, many more will choose to write it.

Kids cheat not just because they're lazy. Cheating makes people feel smart. The fact you can get credits by doing very little while others work their asses off is rewarding and self-validating.

The big issue with an exam-only approach is that a one-hour exam is not enough to evaluate a student's performance, unless your educational goal is just to make students memorize things by rote. I'd consider a 3-hour open-book exam the bare minimum. But if every class does that, it'll be too exhausting.


They will not choose to write it. Would you work on something consistently if nobody cared about it?

There needs to be a reward for doing essays. That reward can be emotional eg. "the teacher I respect liked my essay" or "my essay was read in class" or "the teacher gives feedback that makes me feel a sense of growth". In that case, maybe kids will do it.

However, I think it's hard for a teacher to inspire respect to a classroom and the difficulty scales with the number of people in the class, so grades are used as a hack.


> Kids cheat not just because they're lazy. Cheating makes people feel smart. The fact you can get credits by doing very little while others work their asses off is rewarding and self-validating.

I am 100% certain quite a lot of people cheat because they procrastinated and don't have time to learn. Or because they were indeed too lazy to learn. Or because they can't learn, because the course is too hard for them.

Or because video games and youtube are more fun.


When I read this suggestion, it sticks out that, un-spoonfed, people with deficits in their study skills, executive function, and institutional literacy would be most disadvantaged.

So, you have two kids who are equally bright, and you tell one "you don't have to do these assignments, but there is a test at the end" and the other "you have an 80% chance of failing if you don't do these assignments. Analyze each assignment and its feedback for shibboleths, like the way they ask you to structure your introduction, and optimize for demonstrating you know these shibboleths over everything else."

University is a wonderful petri dish for growing into who you want to be. You have access to expertise and resources and a certain kind of institutional credibility. Few students actually use these fully, and the ones who do were told to. You need some idea of who you want to be and why, and this is developed in you by other people. Children don't just know stuff.

I think these are positive changes if and only if we accompany them with systematic study-skills and self-management courses to bridge this gap.


Do Universities no longer do that? All of my finals were 3hrs. There was a special schedule during finals week with 3 slots per day. The time of your final exam was based on when the first lecture session of a class took place. Really sucked to get an 8-11 AM slot when your classes never started before 11.

Fun prank: set all of the clocks in your dorm neighbor’s room to different wrong times. Guy across the hall knew we were messing with him, trusted his watch - which had the correct time, but wrong alarm time. Realized he had a problem when he had hot water in the shower and no one was around. He was only 45 min late to the exam. Good times.


Different groups have different standards of course, but that prank seems pretty cruel.


I'm a little confused so I could use some clarification: where did the "fun" in the fun prank kick in? You caused him to be late and risk his exam. Could you break down the fun for me?


Unfortunately English "fun" is used both for good wholesome fun and for the cruel fun that is "making fun of" people (laughing at their misfortune).


I don't see how potentially ruining someone's exam classifies as fun


We had the alarms going off early. Like every half hour from 6:00 AM. We knocked on his door and his roommate told him when he was leaving for breakfast.

He did fine in his exam. 3 hrs was overkill. Sometimes you can be your own worst enemy.

It was the 80’s. I guess kids these days are soft.


This is sociopathy, not a fun prank.


Well, agreed, but nobody said anything about 60min exams x) In fact I don't remember ever having an exam at uni that was less than 2h.

I agree that open-book exams, or at least a closed-book portion followed by an open-book portion, are important to actually gauge the student's abilities rather than his/her capability to cram.


> I unironically believe if you tell all the kids they don't have to write the essay at all, many more will choose to write it.

I seriously doubt that. In my experience many students won't do anything that doesn't directly contribute to their grade.


Exactly. In school I only did the stuff teachers told me isn't important and I don't need to do.

You want me to not know something? I'd better make sure to get to know everything about it. You push me to do stuff? Why should I care, if you already do.


I’d expect that a lower proportion of those who wrote it would have cheated, but certainly not that more would write it.


Well, neither in school nor in university did homework count for grades for me (growing up in Germany, and with some very rare exceptions).

So this isn't all that crazy.


During my undergrad in Germany, the CS department was in the process of switching from optional homework to various forms of mandatory homework (either directly counting towards the final grade, or requiring a minimum score on the homework before allowing registration for the exam). AFAIK this was because under the old system, there had been too many students registering for exams despite being woefully unprepared, and then predictably failing as a result.

I think optional homework works for classes that are obscure enough only somewhat intrinsically motivated students would consider taking them, but in mandatory classes or trendy majors, there's going to be many people who need a bit more external motivation to study.


I studied math, and all our exams were oral exams. The professor had to actively accept you for the exam, which was usually a given, if you did your homework. (But you could probably get into the exam without doing the homework, too, if you convinced them.)


I have been teaching CS at German universities for close to two decades now.

> AFAIK this was because under the old system, there had been too many students registering for exams despite being woefully unprepared, and then predictably failing as a result.

True. That's the real Dunning-Kruger problem: incompetent people do not know how much help they need to get competent. It is our job to show them their weaknesses as early as possible so that they can effectively work on them.

(I believe that state-funded universities (as in Germany) have some obligation to not only educate the self-motivated top 1% but also offer a solid education even for less perfect students - at least if there is a societal need for their competences.)

Another, more important, reason is that written exams are not good tests of programming competence - especially as tasks and frameworks get more complex. We want to assign good grades to students who are competent at developing software in realistic settings, not in highly artificial exam settings.


> That's the real Dunning-Kruger problem: incompetent people do not know how much help they need to get competent.

Of course, the paper by 'Dunning-Kruger' never showed anything like that.


Huh - what do you mean? I just checked again, and this is IMHO exactly what Kruger and Dunning reported:

From the abstract:

"Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities."

https://gwern.net/doc/psychology/1999-kruger.pdf


Oh, that's what they say in the abstract. But have a look at the actual experiment and analysis.

All they found was a statistical artefact.

You can just have a look at the big picture at the top of the Wikipedia article: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

> Relation between average self-perceived performance and average actual performance on a college exam.[1] The red area shows the tendency of low performers to overestimate their abilities. Nevertheless, low performers' self-assessment is lower than that of high performers.

So regression toward the mean explains the entire effect.

See also https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#...
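The artifact is easy to reproduce in a toy simulation (a sketch with made-up numbers, not data from any of the papers): give everyone an unbiased but equally noisy self-assessment, and the bottom quartile by test score still appears to "overestimate" itself, purely through regression toward the mean.

```python
import random
from statistics import mean

random.seed(0)
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
# Test score and self-assessment: two independent, equally noisy
# readings of the same underlying skill -- nobody here is biased.
test = [s + random.gauss(0, 1) for s in skill]
self_assess = [s + random.gauss(0, 1) for s in skill]

def percentiles(xs):
    """Map each value to its percentile rank within xs."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    pct = [0.0] * len(xs)
    for rank, i in enumerate(order):
        pct[i] = 100.0 * rank / (len(xs) - 1)
    return pct

test_pct, self_pct = percentiles(test), percentiles(self_assess)
bottom = [i for i in range(n) if test_pct[i] < 25]
t_mean = mean(test_pct[i] for i in bottom)
s_mean = mean(self_pct[i] for i in bottom)
# The bottom quartile by test score scores around the 12th percentile
# but "self-assesses" well above that -- an apparent overestimation
# produced entirely by noise, with no metacognitive deficit in the model.
print(round(t_mean, 1), round(s_mean, 1))
```

This is the statistical-artifact reading in a nutshell: the classic Dunning-Kruger plot shape falls out of any two imperfectly correlated measurements, whether or not the unskilled are actually worse at self-assessment.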


Thanks a lot for the explanation and link. I had read the original papers a long time ago and was not aware of the more recent discussions. That said, I just read a few of the critical papers, and it seems that even Gignac and others do not dispute that the effect is observable. They just don't believe that unskilled people are inherently worse than skilled people at estimating their own skill, but rather that all people overestimate their skill (the better-than-average effect).

This is still very much compatible with my claim that unskilled people profit from being reminded (repeatedly, not just in the exam at the end of the semester) that they know less than they think. I will avoid conflating this with the Dunning-Kruger effect in the future (Thanks!).

A recent study found that medical students' estimates of their own intelligence drop right after taking an IQ test (confirming the better-than-average effect). But one week later, their self-estimated intelligence returns to pre-test levels. To me this suggests that students (and everyone else) will overestimate their abilities, and invest less time in learning, if they are not constantly given feedback.

https://www.sciencedirect.com/science/article/abs/pii/S01918...


Right - for me each year of university (in UK) was a year of learning with exams at the end, that was it, and it was normal.


This, in general, seems like a great thing. The goal of a university should be to produce premium students, and nothing's better than a trial by fire.

We actually had this exact thing at my university. One sophomore level weed out class was a "self paced" electrical engineering class. It was called self paced because you were given a textbook and were free to work through it at your own pace. But to finish the class by the end of the semester you had to average 2 chapters completed per week, and completing a chapter not only included finishing a problem set and taking a test which you had to score 90%+ on (and were required to finish another problem set and retake it otherwise), but on occasion also demoing some skill in the lab.

It was brutal, but one of the most educational classes I've ever taken - and obviously not just because of what I learned about electrical engineering. Of course it seems modern universities have just become profit-driven degree treadmills. Weeding out students? That's reducing profit! And yeah looking back at my uni's page it seems this class is no longer self paced. Lol. And that's at a top 10 school. The enshittification of education.


Part of the issue is with the purpose as you describe it. Sure, at top 10 schools, a trial by fire would result in much needed “growing up” as the gifted but undisciplined (speaking for myself and many users of this site) students find their way to more durable motivations. But at the vast majority of schools, a trial by fire would end with a lot of students burned.

Perhaps that raises the question: if those kids can't handle self-directed education, why are we putting them there in the first place? But that's definitely a grey area, and there are hundreds of thousands of students who are smart enough to do well in higher education and skilled work, but weren't disciplined enough as freshmen to handle what you're describing.


Many employers pay a premium for predictably elite cadres of students. The schools want to pass off mediocre graduates as having some of the elite special sauce even though only a small number of students have what it takes. We know exactly how to produce elite cadres by aggressive sorting. But the incentives created by the federal government encourage the institutions to extrude mediocre students like a chicken nugget machine produces processed meat product. Every hot student-nugget is worth tens of thousands of dollars a year in freshly printed loan money directed towards administrators and rent on dorms and apartments, irrespective of quality; so the incentive is to stuff the students with filler.


The idea of "weeding out" students implies that many students are "weeds" who need to be uprooted and thrown away rather than grown.

A teacher who thinks this way is probably in the wrong profession. A university that operates this way is failing to educate the students it admitted.


Weeding out, as I've seen it, is a class that requires a certain level of commitment, and an ability to either plan your work or tough it out, that high school just can't really prepare anyone for. So in a way the student isn't a "weed", but their motivation or maturity might be, and they're free to retake the class once they know that university will require them to put in more work than high school. If they can't put in the work, then completing a thesis and graduating is going to be very hard, and that happens in the last year of uni, so better to set expectations early with a "weed out" class.


Ideally it's not weeding out but distributing into education paths which fit every student.

From my experience studying electrical and computer engineering, I definitely prefer that they chose to put hard electrical engineering courses in the first semesters because I knew immediately not to focus on them because I didn't like them.


I think there should be a better onramp to EE, as there often is in CS.


I think the problem is that no teacher has the time to babysit a student. If they just don't care about their education or can't put the time in, they shouldn't be wasting their time and money.

Some students also just don't have the aptitude for an Engineering or Computer Science degree. It's better for everyone if this is figured out early. I know plenty of people that dropped out of a Computer Science degree because they hated it or thought it would be a great way to make money and were in over their head.

We had classes that were for 'weeding out' students in Computer Science. They involved calculus because if you couldn't pass this class, you wouldn't be able to handle the 5 or so classes after this class that required it.


I studied computer science and have been working as a programmer for about 20 years. The downside is that you're filtering a lot of people who would actually potentially be great programmers but are for whatever reason not good at calculus.


We have too many university graduates that can't get jobs in their fields, in a time where there is a growing deficit of people in trades.


Either the unfit and uninterested get weeded out at the education stage or they get weeded out by no employer being willing to hire them; the former seems kinder than the latter.


Who are the "unfit" students? Why were they admitted and what do you think should happen to them?


my kid attends a school in which they’ve given up on lectures. each “class” is basically a proctored mini self learning test from a booklet that’s a mix of content and exercises to work through individually. a teacher is around to answer questions and grade the booklets.

many kids fail to make the transition from spoon-feeding to self-learning, but those who do then begin to realize that they can go as fast as they are able and need not follow the herd. they also develop a strong sense of whether they’ve understood each booklet or not. it leads to a competition for learning fast AND well because there are also traditional proctored checkpoint exams from time to time plus kids do the ordinary standardized tests to calibrate.

i feel it’s an excellent system that prioritizes learning over conformity though it is obviously not a candidate for mass adoption because many kids wash out after making no progress for a while.


Dealing with untreated ADHD through college, "do the ungraded homework and spend time with the TAs" was way more valuable than "go to class". Lectures for me were borderline useless. Fortunately, this was something I figured out in high school.


On the class topic, I suspect that attendance was more impactful for students pre-internet as the alternative was to wade through the library piecing together material.

With lecture notes/slides available online, well prepared books and study forums readily available - in-person attendance can feel archaic.

We may be experiencing a similar dynamic in education with AI. In a world where we can create individualized curricula for each student encompassing the entire tree of knowledge, perhaps it's time to rethink how we educate students rather than push them into lecture halls designed for the Middle Ages.


Here's an alternative hypothesis...

People thrive under regularity, and young people (especially) tend not to understand that. Similarly, being able to focus on a single thing is a kind of super-power, while multi-tasking generally hurts performance on tasks.

Going to class (and paying attention) means that you've got a regular period of focus on the class topic. That combination of regularity and focus translates into long-term learning and better performance.


Personally, I'd resent paying thousands of dollars a year to be given textbook sums to complete... I could have downloaded those myself. Where's the actual value these educators bring?


This would mean moving to 100% weighted exams, and there's good reasons why there has been a general trend away from that over recent decades. For one thing, some students simply perform better under pressure than others, independent of their preparedness and knowledge of the material.

Mind you, I don't really have any alternative suggestions.


> Mind you, I don't really have any alternative suggestions.

This is the thing.

If this choice is between:

1. A gameable system that will be gamed by most students.

2. An ungameable system that will unfairly punish those bad under pressure and time constraints.

There isn't really a choice at all.

One option would be a school-provided proctoring system, allowing teachers to outsource the actual test-taking times. It could be done outside of class time, at the student's convenience, and they could have 3-4 hours if they chose.


> An ungameable system that will unfairly punish those bad under pressure and time constraints.

Given modern communication technology it’s still gameable


With a proctored in-person exam, we're talking about the difference between gaming the SAT, say, and gaming a take home English essay.

And rates of cheating of "well under 1%" vs "well over 50%".

Even if we allow for less rigorous proctoring standards, we're still probably talking about "2-3%" vs "well over 50%".


"Can they do this under pressure?" might in fact be a good question to test for and train for. A lot of real-life activity after graduation will involve some pressure.

But we could do what I'll call a "monastic exam".

You've got a week, not an hour, but it's in a little monastery and you don't have your phone or other unapproved tools.


One of my freshman professors accidentally did nearly that. The final exam was 3 hours, which was normal at my school, although many students finish in 1-2 hours. After realizing nobody was close to finishing after 2 hours and that he had greatly underestimated the difficulty, he expanded the time limit to 6 hours!

I will say it's not practical to have exams that long. In this case, the dorm required me to move out immediately after the exam and my parents were waiting to pick me up, so I decided to leave after 4 hours to avoid unnecessary panic or having to drive overnight. In hindsight, the professor probably would have let me make a phone call, but that didn't occur to me at the time.


Oh I'd love to be able to assign Walden exams.


Fair point, but the solution I propose would only apply to those parts of the assessment involving solo writing assignments -- so excluding class participation, group assignments, etc. (Which is not to say that students can't use AI to cheat on these, but they have other solutions.)


I mean, the real answer is that the other students were cheating on their assignments. It's that simple. We keep making up excuses for all of this shit. Some people don't "test well". Turns out those people don't know shit.

Let's get real here. I know why these nonsensical memes keep propagating but dear god. People will just believe anything these days, including that gas stoves cause asthma or whatever other bullshit is being peddled.


This isn't true. I'm one of those people who tested remarkably well, and back in college would do fine on exams despite frantically copying all of my own (non-comp Sci) assignments. Better than my peers who knew more and helped me cram. Test anxiety is real.


I was a great test taker; I used to make a sort of game out of finishing tests in half the time of almost everybody else while acing them. I also never crammed, never attended pre-test study groups, and sometimes made a show of drinking beers right before the test just to annoy the people cramming at the last minute.

But I'm not particularly brilliant, in fact I wouldn't be terribly surprised if I have undiagnosed ADHD. My test taking performance trick, which I freely told everybody to their annoyance, was very simple. I knew the material! Read the assigned texts, do the optional homework, pay attention in class. If you know the material you don't have to try to cram it into your brain in the last half hour before the test. If you know the material you don't have to try to reason it out from first principles during the test. You just go in, fill out the easy answers straight away, go back and do a second pass for the tricky questions, and that's it. If you have to sit there wracking your brain for 30 minutes on a single problem it's because you already fucked up with how you approached the course weeks ago.

Again, I'm not special for this. There were a handful of other students who were as fast as me. We'd sit in the hall waiting for our friends, look at each other and say "you knew all this stuff too, huh?" "yeah of course"


It is definitely not the case that if student A performs better on a timed high-stakes test than student B, that means A must have worked harder / prepared better / know the material better / etc. than B. Some people are very skilled at bullshitting their way through stupid school tests, and others are not. Very few school tests are well enough designed that they can effectively measure the intended target of how well someone understands the topic, content, and course-specific skills which are being intentionally trained in the course.

Bullshitting through tests is a learnable / trainable skill, but schools generally do not teach it very coherently or well and most students do not deliberately practice it. It generally doesn't have that much to do with the content or other skills intentionally taught by any particular course or by schools in general (there's decent overlap with the skills involved in competitive debate and extemporaneous speech, which some students participate in as an extracurricular activity). Rating students on how good they are at bullshitting their way through exams is sadly a significant part of the way our education system is focused and organized, but in my opinion it is not a valuable or particularly valid approach. There are certain professional contexts/tasks where this kind of skill is useful, but developing it per se shouldn't be the focus of the education system.

Sometimes this and related skills are summarized as "intelligence" ("oh she aced the test without studying, she must just be really smart", etc.), but in my opinion it's quite a misleading use of the word.


> For one thing, some students simply perform better under pressure than others

Learning to perform under pressure is the main purpose of attending college.


This is an example of a very limited social Darwinism. Basically the idea is to remove a lot of enforcement and rules in some activity, or maybe even all of them, and then the "free market" will regulate itself, with "deserving" students managing themselves, and "undeserving" ones left behind.

But the point of the university is not only to teach English grammar and math operations, but also to teach how to work in teams, manage yourself, etc. The social stuff. And I suspect a significant number of students benefit from it. And I also suspect that by doing this at scale, the whole society benefits on average.

Removing all control and only checking the knowledge during the exam would lead to a lot of students never catching up. It is likely that it will also lead to the top students being more and more lax and eventually also falling behind.

The whole idea hinges on the base motivation - why do we need primary/secondary etc. education at all? To produce a dozen elite self motivated geniuses per year per country? Then your proposal would work perfectly. Or maybe motivation is different?..


> the assignments could simply be worth 0% [..] that the proctored, for-credit exams would demand that they write similar essays.

We run university programs at my company, and arrived at this bit of insight as well. That said, some of your points are incorrect or incomplete:

- You can't build systems assuming responsible individuals. These systems are guaranteed to fail. Instead, assume individuals are mould-able, and build a system which nurtures discipline towards goals. This works.

- There are still issues with cheating, but it's more of an older way of thinking that we developed methods to reset.

- Advanced students need to be given more challenging assignments; the quantum of assignments should be the same no matter the capability of students. This solution was unworkable until GenAI came about.

Looked at from a pure individual skill-building perspective your ideas are alluring, but if one looks at the completion rates of online courses (Udemy/Coursera - under 4%), then one understands why a physical, cohort-led education system can work.

Happy to chat with anyone who'd like to delve deeper into this.


> if one looks at completion rates of any online courses (Udemy/Coursera - under 4%)

As someone with a 96+% 'failure' rate on Udemy/Coursera I honestly don't see the relevance of this statistic. Most people going to University are there primarily because they want/need the degree. That piece of paper is really valuable, perhaps even more so than the knowledge gained. The piece of 'paper' offered by Coursera/Udemy etc. has basically zero value, so the people taking those courses are doing it almost exclusively for the knowledge they offer. Once you've learned what you wanted to learn from the course there is very little incentive to go the extra mile and go for the 'completion'.


The piece of paper is valuable because it represents a sustained effort of learning over an extended period of time.

I understand how from an individual's pov what you said makes sense. Similarly I hope you understand why from the system's perspective: it's the effort that's mandated and not just the proficiency.

Employers and others (higher education orgs, etc) care a lot about sustained effort, alongside proficiency. Only proficiency-focused systems (like Udemy/Coursera/Youtube) are not respected as credentials, since they do not showcase this.


I teach university courses in the United States. Many of us have certainly down-weighted homework substantially.

However, when some colleagues tried homework as 0% for introductory courses, most students omitted the homework, then failed the exams. Modern students seem to require explicit incentive to work, otherwise the usual: scrolling upon flat screen devices, hedonism, and so forth.

In this case, who has failed: the student, or the professor?


In my experience (about 2 decades ago) in a group of 20-30 students only 2 or 3 are able and willing to do homework. Most students just find someone else and copy from them. The real learning happens when preparing for a big exam.

And to pass an exam students have to prepare for the exam. Homework will only help there if it is similar to the exam.

One time I had to evaluate a written exam where the professor had set up a trap. There was a question that looked like a standard question from homework, but if you used the standard-techniques from the course your calculations didn't work - it was a nasty special case. Most people that started with that question just burned 30 minutes without getting anywhere... a lot of students failed, but at least they learned something about life...

Oral exams are different. Giving a quick, well-prepared answer and being able to solve difficult tasks over a few days are completely different skills. Students there prepare for the professor. There are transcripts of previous oral exams. And professors change over the years - the final tough question for an excellent student will, a few years later, become a starting question. People who didn't know that game and didn't have access to any transcripts were in serious trouble... None of the homework would have helped in the oral exam.


"In my experience (about 2 decades ago) in a group of 20-30 students only 2 or 3 are able and willing to do homework. Most students just find someone else and copy from them. The real learning happens when preparing for a big exam.

And to pass an exam students have to prepare for the exam. Homework will only help there if it is similar to the exam."

That's not learning, really. I can say that confidently because I unfortunately regressed to this during uni, and it was the same story with my peers. One simply can't prepare for multiple exams sufficiently in a few weeks' time (or less). So the only path left is hysterical rote memorization of as much material as possible to squeeze out a passing grade, then immediately forgetting all of it within a few months. A burst of "learning" twice a year for a short time doesn't translate into real learning.

And that's for some simple courses during the first few years. Specialist courses later in the program are sometimes impossible to rush-"learn". When I tried to pull this off for a Probability Theory course, I failed spectacularly, not even getting the lowest passing mark on the first try. And others failed the same way.


Rushing is not good. Some people with good results started early and spent a lot of time reading transcripts of oral exams - and copied homework from other people. Some of them are professors now.

If good results are important, it's most important to know what happens in the exam - and adapt professionally. Learning is fun. I myself always did a lot of homework. But I wish I had been more professional - constant challenges are fun, but most of it is not very time-efficient.


This is exactly how one of my English professors structured his class. The students would have to do the research beforehand and come in on test day with their works cited page completed. The actual paper would be written by hand during class time. You were only allowed the blank green book and a couple of pages of notes with direct quotes to incorporate into your paper.

He wasn’t worried about llms, they were not around, but plagiarism. It worked well.


That's pretty much how I teach my programming classes. Assignments are worth zero, or sometimes very little.

The difference I notice with AI is that the bell curve is nearly inverted. You have the good students, who use AI to support their learning. You have the students who let AI do their assignments, and then fail miserably on the exam. And there is hardly anyone left in the middle.


In my (limited) experience, programming classes, especially intro level, often end up with a binomial-ish distribution anyway. I was casually assisting some research on why this is when I was helping teach labs and such so was interested. I'm sure more research happened after I wasn't doing that any more, but I remember the best way of removing this at the time was catchup classes.

A lot of intro programming builds directly on previous lessons, much more so than, e.g., maths. If you missed how variables work (off sick, just didn't get it, whatever), you're still stuck when it comes to functions and anything else following and then you're going to fail - it was quite predictable. We studied other university courses and nothing came close to the pattern we were seeing, except "computing for chemistry" or something, which was basically the same sort of course just in a different department.

So we added explicit catch-up classes a few days after a topic was covered so if you missed it, you could get quite personal help on getting back up to speed. This really shifted the distribution to the right, then the people who failed were either those who just didn't care, or those under more extreme circumstances where this couldn't help (or those who just could not learn programming for love nor money but that was rare ime.)


> often end up with a binomial-ish distribution anyway

I think you meant "bimodal", not "binomial". They mean roughly opposite things here. :)


You are right! Bimodal indeed, thanks


It used to be like that, and I'm old enough to remember why they changed: not every student handles exam stress well. And it has nothing to do with their competency in that subject matter.

For example, in the UK, it was shown that biasing course results towards exam marks caused women to perform worse than men. But when results included assignments, women generally performed better.

This is obviously a generalisation but it is one of the reasons why so many courses now take assignments into account for their final grade.


In my undergrad, a few decades ago, it was typically the case that assignments and exams both were a part of your final score. Often it was something like 40% exam/60% assignments, but this could change.

However what you mention about different people being better in different circumstances reminds of what our maths courses typically did, it was called "plussage" IIRC. Basically, the scores were calculated, and you got the best score from a 40% exam/60% assignment weighting or a 60%/40% (or something, the exact values are lost to time.) So if you were bad at exams but had done the work through the semester, you got a boost. Or if you were bad at deadlines but had still studied, you weren't (too) penalised.
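The "plussage" rule described above can be sketched as a small function that simply takes the better of the two weightings. A minimal sketch in Python, assuming a 40/60 and 60/40 split (the commenter notes the exact values are lost to time, so these are illustrative):

```python
def plussage(exam: float, assignments: float) -> float:
    """Return the better of two weightings of exam and assignment marks.

    The 40/60 and 60/40 splits are assumed for illustration; the actual
    weightings used by the course are not known.
    """
    exam_light = 0.40 * exam + 0.60 * assignments
    exam_heavy = 0.60 * exam + 0.40 * assignments
    return max(exam_light, exam_heavy)

# A student weak at exams but strong on coursework gets the
# assignment-heavy weighting:
print(plussage(exam=55, assignments=85))  # 73.0, not 67.0

# A strong exam performer who missed some deadlines gets the
# exam-heavy weighting instead:
print(plussage(exam=90, assignments=40))  # 70.0, not 60.0
```

The point of the scheme is that each student is graded under whichever weighting flatters their particular weakness, so neither exam nerves nor missed deadlines alone can sink an otherwise solid student.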


A couple of points here:

- People need to learn how to write. The quality of student writing was one of the biggest criticisms of students when I was in university, and that was 30 years ago. Writing will only improve with practice and someone to evaluate it. Very few people will be able to learn how to write properly by reading about it, and even fewer people will even realize that you can learn how to write by reading the work of other people (which is important for learning about style in a particular field). For most students, even well meaning ones, no grade means no work done.

- A certain segment of the student population will find ways to cheat anyhow. All you have done is raised the bar so that, hopefully, fewer people will cheat. Quite frankly, I don't know how helpful that is if the "top" of the class moves on since the top of the class tends to be defined by their GPA.

- Test anxiety is a real thing. Different people go to school for different reasons, not all of which lead to high pressure careers. Do we really want to limit who can effectively access an education because of that?

There is no easy solution to this problem. Likely the best solution would be to remove traditional assignments and exams from the loop altogether and have students work directly with their instructors. Yet this has its own set of problems (it assumes both parties are honest, it is difficult to ensure consistency in the delivery of curriculum, etc.).


So one bad day can ruin your marks.

It’s also a disadvantage for people with test anxiety.


You can have multiple tests throughout the term to bring down variance. Personally I love tests, and I think everyone can learn how to perform well.


It's impossible to design a system which is perfect for everyone. People with attention disorders might feel the opposite and will do better with the pressure of a test.


That’s why the current system has both.


I had to do these for a couple college classes (The original OpenAI GPTs were just released around when I graduated, I remember reading about them and then avoiding pytorch because the wheels were a pain to build.)

You have to get a special blue book with a couple blank pages and then write an essay with the prompt that's given at exam time. Then you turn in the book at the end of the exam. I think it's a great idea and was surprised more classes didn't work that way but I guess it's like you say: grading written assignments like this is a lot of work.


I hated all my proctored essays for the simple reason there's no ability to research things so it feels like the only arguments you can present are rhetorical or using made up statistics.


This is a very sensible proposal, however it falls flat when considering that many students who have paid for a university "education" feel entitled to a degree at the end of it, regardless of how much effort they've put in and whether they have learned enough skills to justify one.


I don’t see the issue with those people not getting what they want.


Nor do I, but the university administration might see things differently to us.


Due to my ADHD, I would try to learn 12-72 hours before the exams. I would fail quickly; hopefully someone could have helped me recognize why.


Writing classes should probably be transformed into prompting classes lol. Train students to prompt with clarity and be able to prompt AI to write high-quality essays.


And then have AI summarize those high quality essays for grading! It's like the inverse of compression!

In all seriousness, I don't see the value in this at all. Why would I want to know a statistically likely essay? Wouldn't I rather know what the student thinks?


Maybe someone in the field can speak up -- I'm not sure what is new about this study. It seems to be about an analysis of the double-slit experiment using individual atoms, and the press release implies that this is novel, but that experiment was first done over 30 years ago [0]. Is there anything to this study that is actually new?

0: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70...


> We are well on our way to being a rich country, but not there yet.

Q: Why will Ireland eventually be the richest country in the world?

A: Because its capital is always Dublin.

(I'll see myself out.)


You must have been sitting on that one a long time :-)


Pedant here!

> NIST researchers have made the most accurate atomic clock to date — one that can measure time down to the 19th decimal place.

That's precision, not accuracy.


Nope, it is not.

A single measurement cannot be precise. Precision is a measure of how close multiple measurements are to one another. Accuracy is how close a single measurement is to its true value.

A clock that can measure a point in time to 19 decimal places with respect to its true value is accurate.


there is no true value, so it's not accuracy

it's precision


What is that true value? And was it accurate?


Unfortunately, I am not a time lord, so I don't know. But I do know the definitions of these words, and that is what they are. You are free to argue with someone else about the true meaning of time.

A single measurement can _never_ be precise, it is simply not possible.


Tell me you have never shot a rifle for score, without saying you haven't.

You can be accurate, precise, or both.

Said clock may be precise, but not accurate.


I think you can still get a real sense of overall societal trust from such stories. Not too long ago I horrified a relatively-young person by explaining how in the not-too-distant past, people used to have their names, phone numbers and home addresses automatically listed in a large, White-covered book whose Pages were distributed freely to everyone in the city.


Why were they surprised? The fact that it's printed? Or were they just not aware that the same information (and more) is freely available on people search websites today? (one of which has the same namesake as that large white book)

Either way, I think trust now and always in the US has been driven more by the urban/rural divide than anything else. Even as this article points out, this was primarily a rural phenomenon. When you know your mail carrier on a first name basis, things are a lot different.


> Or were they just not aware that the same information (and more) is freely available on people search websites today? (one of which has the same namesake as that large white book)

This -- while people realize that Google et al. have hordes of personal information about them, they don't expect that information to be available to the general public (thus the horror). Similarly, I expect people would be horrified to find out just how much personal information the data brokers have. There's an aspect of cognitive dissonance at play.


The journal is not Nature, it's Scientific Reports. This is a journal by the same publisher as Nature (which is why it's on the same domain) that will publish anything deemed "scientifically sound" by the reviewers, regardless of the (lack of) novelty and significance. It is very much not Nature.

