I worked for a 996 founder at a fast-paced startup recently. After busting my ass for 5 months, he fired me for not delivering fast enough, despite my having built an entire platform from scratch.
Now, I was old enough to realize the risk. But given this job market, which absolutely sucks for developers, I see young twenty-somethings getting influenced by stupid catchphrases like 996.
You'd get a stack of 120 blue books to grade in a week's time a few times a quarter.
The grading entirely consisted of checking whether the student used a set of keywords and hit a certain length. This was a near-universal method across the university for blue book exams.
Honestly, an LLM would be a better grader than most stressed out grad students.
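The keyword-and-length method described above is simple enough to sketch as a script. This is a hypothetical illustration only; the keywords, point values, and word-count threshold are made up:

```python
def grade_blue_book(text: str, keywords: list[str], min_words: int,
                    points_per_keyword: int = 2, length_points: int = 4) -> int:
    """Score an essay the way the comment describes: keywords plus length."""
    lowered = text.lower()
    # Award points for each expected keyword that appears anywhere in the text.
    score = sum(points_per_keyword for kw in keywords if kw.lower() in lowered)
    # Award a flat bonus if the essay clears the length bar.
    if len(text.split()) >= min_words:
        score += length_points
    return score

# A short answer hitting two of three keywords, well under the length bar:
essay = "Mitochondria produce ATP through cellular respiration."
print(grade_blue_book(essay, ["mitochondria", "ATP", "krebs"], min_words=200))
```

Which is exactly why the method is so easy to game - the script has no idea whether the sentences around the keywords make any sense.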
Everyone has been phoning it in for a few centuries now
No issues from me with using an LLM for suggestive grading, assuming we have some evidence about its grading rubric and a paper trail to audit, with appeals going to human review - i.e. the human teacher is responsible, not the LLM.
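The workflow above can be sketched as a record type where the LLM only ever suggests, the teacher signs off, and every step lands in an audit log. This is a hypothetical sketch; `llm_suggest` is a stand-in for a real model call, and all names are made up:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GradeRecord:
    student_id: str
    llm_suggestion: int
    rubric_notes: str
    final_grade: Optional[int] = None
    audit_log: list = field(default_factory=list)

    def human_review(self, teacher: str, grade: int, reason: str) -> None:
        # The teacher, not the LLM, sets the grade of record;
        # every decision is appended to the auditable trail.
        self.final_grade = grade
        self.audit_log.append(f"{teacher}: set {grade} ({reason})")

def llm_suggest(essay: str) -> tuple:
    # Stand-in for a model call: returns (suggested grade, rubric notes).
    return 7, "covers 3/4 rubric points"

suggestion, notes = llm_suggest("...")
rec = GradeRecord("s123", suggestion, notes)
rec.human_review("Dr. Lee", 8, "partial credit on Q2 after appeal")
print(rec.final_grade, len(rec.audit_log))
```

The point of the design is that `llm_suggestion` and `final_grade` are separate fields, so an appeal can always show where the machine and the human diverged.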
Any audits will be quickly farmed out to yet another AI for review, is my guess.
I'd imagine some system like YouTube's appeals system, where everyone is maximally unhappy.
One anecdote from my SO's time as a grader was that pre-med students were the worst. They would just wear you down to get the best possible grade, appealing literally every missed point ad nauseam. In undergrad classes, most profs would eventually give in rather than deal with them - of course further emboldening them.
No other major was like that, only those dealing with the future hellscape that was US healthcare.
I'd imagine that, yes, eventually your appeals in the AI future will end up at a prof, but delayed to hell and back. Even paying $200k+ won't matter.
> They don't want to grade papers by hand anymore.
This is only half correct. Grading by hand isn't the issue; reading students' handwriting is. Having to read the hurried scribbling of dozens of students is a huge challenge for teachers, who were already struggling to grade typed papers on a deadline.
Back in the day we were writing code on paper (or on punched cards, using them as a paper substitute, as there were a lot of them left over from the Soviet times and they looked very "computer-y"), so even during computer classes you didn't necessarily need a computer. Not that I really think that it can still work in the year 2025 and beyond...
I was just talking to younger coworkers about this recently. Mid-90s to early 2000s: FORTRAN, COBOL, C, and C++ classes all had handwritten code parts for homework, handouts, exams, etc. This wasn't just pseudocode, you had to have full syntax, variable declarations, correct spelling of functions, etc. You frequently had to show code optimization, debugging, etc even on paper. Wild times!!
* All of those classes also had lab time (some dedicated, similar to a chemistry class), info on how to get the IDE if you had $ access to a computer at home, and alternatives as well.
Personally, I see more value in pseudo code (written or typed) and sketch type diagrams (analog or digital) than handwriting code. However, it was WILD and amazing to watch the gray-hairs of those days debug your code on paper!
Studied Fortran 65 as an elective, submitting assignments/exams by writing actual code with pencil and paper. Never got access to the cool-looking machines in the actually cooled room. I'm not kidding: I really enjoyed that paper compared to my other papers.
At my uni we still had some coding tests done with pen and paper (2014-2018), and AFAIK they're still doing them. I even did part of an exam in assembly, with a provided list of Xilinx PicoBlaze assembly mnemonics.
I don't know why people demonize them. If you know the syntax you're asked for, you can write in that language; and if you're asked to write some algorithms in pseudo-code, you should be able to do so without any computerized help.
I struggled a lot with handwritten assignments, and the greatest boost to my grades and academic ability was getting my own laptop in high school because of the writing.
So I really do not wish to see that rolled back. But I could see the internet being declared too destructive.
A computer without internet, a book, and ample time would have worked for me.
From a teacher's perspective, I'm sure the craft is a mess of bad school policies vs. so-called "best practices" vs. real learning science vs. government policies vs. ancient bad advice (eg. learning styles and tablets in classrooms) vs. personal opinion.
It's not like there is a senior engineer who's got mountains of expertise to defer to (like a software team would have). Teachers are likely given directives from their schools and get dumped a bunch of tablets and are told this is "modern" education and to just roll it out.
Anyway, to your point: top-down directives are what change schools. There have been successes, such as the recent smartphone bans in Ireland and the UK. Schools taking on these problems and solving them themselves could go a long way, rather than waiting for government mandates.
The best format I ever learned math was with plain sheets of printer paper, essentially a page per problem letting me doodle the problem and really think it through freely. After working with the concepts we then logged on to Mathematica for visualizations to really cement the concepts.
Maths is probably the safest subject. Reading comprehension and writing are the dangerous stuff. They're arguably the most important subjects in regular school, second only to socialisation skills.
My best experience for book reports was by oral exam in high school with a class of about 30.
Everyone had independent work, and one by one you were called to the teacher's desk. He would take your book, open it to a "random" spot, read a couple of sentences, and then ask what was going on in that scene. Hard to bullshit.
This could be modified to work like parent-teacher conferences, with appointment slots while everyone else is doing something else (lunch, another class, maybe scheduled after hours).
There was a time when governments, banks, corporations and institutions had big iron computers, and they were not in the classroom. That time was okay; education happened, and some people who went into computing did very cool things anyway.
How about being strict on spelling and grammer (sic), so that GPA accurately places students in colleges? The days of dunces getting a 3.9 GPA and making it into Yale need to end.
So how does a radical materialist explain consciousness - that it too is a fundamental material phenomenon? If so, are you stretching the definition of materialism?
I find myself believing idealism or monism to be the most likely fundamental picture.
Well, the hard problem of consciousness gets in the way of that.
I assume that as a materialist you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to our feeling aware) - that our awareness is the 'software' running on the wetware.
That's all well and good, except that none of it explains the 'feeling of it': there is nothing in that third-person material activity that correlates with first-person feeling. Reductionist physical processes cannot substitute for the feeling you and I have as we experience.
This hard problem is difficult to surmount physically. Either you say it's an illusion - but how can the primary thing we are, which we experience as the self, be an illusion? - or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
whenever I see the word 'reductionist', I wonder why it's being used to disparage.
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer with certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
Because consciousness itself cannot be explained except through experience, i.e. first-person experience - not through material phenomena.
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, an experience that has no direct correlation to the physical correlates of its being.
You guys are aware of Advaita and neo-Advaita, right? It has basically been the perennial philosophy underlying all subjective spiritual experience, from Sufism to the Gnostics to Buddhism and the Tao.
Of course it could all be claptrap that humans want to believe in but I find it to be pretty powerful and I think it is true
That's called metacognition (what humans do), not subjective experience - which is the feeling of what happens and sets animals, or agentic creatures, apart from rocks (not sure about plants).
Do you mean the bat has no subjective experience? If so, that's a pretty extraordinary claim to make, and one that raises great ethical concern about the treatment of animals.
If bats have no subjective experience, it's ethical to do anything to them; but if there is one, then they deserve (as all animals do) to be treated as ethically as we can manage.
IMO, consider bats to be similar to mice: we've studied mice and rats extensively, and while we cannot know precisely, we can be pretty sure there is subjective (felt) experience there - i.e. almost all our scientific experiments and field data with so-called 'lower' organisms show evidence of pain, suffering, desire, play, etc., all critical evidence of subjectivity.
Now, I don't think bats are meta-conscious (metacognitive), because they can't commiserate about their experiences or worry about death the way humans can, but they feel stuff - and we must respect that.
You don't need to know if it has a "subjectivity" to know if you can torture and kill it, you can rely on the writhing and squealing. Making up artificial distinctions and questions with no answers is just a conceit we get into, ultimately to justify whatever we want. There are too many people on the planet and we need to "process" a lot of life for our benefit.
Anyway, if there is no mind in the sense of a personal identity or a reflective thought process, then really you're just torturing and killing a set of sense perceptions, so what would be the basis of a morality that forbids that?
>Anyway, if there is no mind in the sense of a personal identity or a reflective thought process
I don't think "mind" is limited to those two things, and I think it may be on a continuum rather than binary, and they may also be integrally related to the having of other senses.
I also think they probably do have some non-trivial degree of mind even in the strong sense, and that mental states that aren't immediately tied to self-reflection are independently valuable, because even mere "sense perceptions" include valenced states (pain, comfort) that traditionally fall within the scope of moral consideration. I also think their stake in future modes of being, over their long-term evolutionary trajectory, is a morally significant interest.
Saying it might be on a continuum just obfuscates things. What do you mean exactly?
If there is no sense of self or personal identity, how is that different than a block of wood or a computer? That there might be "mental" functions performed doesn't give it subjectivity if there is no subject performing them. And if there is no persistent reflective self there is no subject. You could call instincts or trained behaviors mental, activities of a kind of mind if you wanted to. But if it's not self aware it's not a moral subject.
Also, the tech world - Google and Facebook (Meta) in particular - deserves part of the blame for this state of the world, for creating social media platforms that build willful-ignorance bubbles around their users, allowing our stupidest people to feed themselves politically motivated conspiracies and lies over scientifically validated truths.
I haven't done intensive testing yet, but based on my preliminary tests, the output is about 80% consistent. The remaining runs mostly suggest additional changes.
I set the foundation for a Telegram assistant, a web app, and a desktop app, while ditching Figma, Notion, Slack, and Pipedrive. Not bad for a fistful of tokens.
The amount of bugs and tech debt boggles the mind here
hey (author here).
obviously, the rest of the content adds a lot of nuance to this statement. it's a bit provocative on purpose.
But in practice, now that I am working with it, what I needed from those tools already works, with no major bugs so far. I haven't recreated the tools - just the parts I need, so I'm able to plug features in and out. Also, many of those features are usually available in great libraries (like Tiptap).
No major bugs for you as a user, maybe, but I would bet there are some very serious security issues in multiple components. So I hope none of that code is reachable from the outside - which of course is already not true, as you are processing data from the outside with LLMs.
I've been in the game for over 25 years and have built plenty of architectures serving millions of MAU, so I am not super anxious about what I've built here. But any API I use can be breached. I would say the risk I'm taking is about the same as using Airtable. And sadly, anything can happen. Also, I am not using it for financial transactions - mostly just deal-flow material.