As Computer Coding Classes Swell, So Does Cheating (nytimes.com)
211 points by danso on May 30, 2017 | 293 comments



This is hugely important as a lot of people see programming as a ticket into higher-end jobs and will spend as little time as individual contributors as they can. The idea that "oh, they'll get caught in the workforce" is not correct - quite often the plan in the mind of these folks is that they will try to fake their way through and be moved up into project management, technical marketing, etc. and never have their lack of these specific skills exposed again.

In my experience these people are very dangerous, as their motivation to avoid being forced to try to do the thing they were actually hired to do is very strong. So they will have a mindbogglingly strong motivation towards office politics, manipulation of grievance mechanisms, etc. relative to someone who is 'distracted' by the prospect of, I dunno, doing the work that they are meant to do.

I worked as a tutor at the University of Sydney in 1993, where a group of 7 students hired a professional programmer to sit the 'gatekeeper' practical exam. These exams could be retaken, but you could not enter 2nd-year CS without passing them, regardless of your marks. The students were caught (via handwriting analysis on the worksheets where you were required to show your notes, among other things!) and in some cases fought the university in court for years over this.


Here's the trajectory:

Fake your way through basic CS courses.

Find your way into group-work-heavy projects in later years, possibly doing enough ancillary aspects of the work to be semi-useful. Your other group members will likely find it easier to grin and bear it.

Optionally: do a Masters or PhD. Plagiarize your work, or potentially hire someone to do it for you.

Continue into the work force, focusing on office politics, group projects, putting your name on things that you weren't really a contributor to, etc. Write a lot of reports.

This is why we're not a profession, folks. I'm amazed at the number of people who got on here to theorize about every goddamn thing ('grade inflation!' 'what are marks good for anyway?' 'I use the web all the time to remind me of things I've forgotten!') aside from the content of the article, which is that there are tons of people going through coding classes who are apparently aiming to get through without learning any code and want to cheat.

Every time you get dragged through some pointless, patronizing and degrading interview process that assumes that you don't know your stuff - you can thank the cheaters and fakers, and the fact that our industry doesn't have the will to root them out.


A company in the Netherlands sometimes hires me to help them guide the interns. They are what is called a learning-company over here.

Out of the 100 IT interns every year, there are usually only about 5 who know more than the basics. This has been a consistent pattern over the last 4 years. It does not matter what level of education they have. Many times I even had to explain what functions, conditions, etc. are, while they somehow still managed to get average grades in classes that require this knowledge.

I really do not understand how so many people manage to fool almost everyone for so long. It usually ends up being the case that I (because I get hired to guide them) or the roughly 5 good interns that are there do most of the work for them. Of course, some of them still manage to set up a WordPress website or write some simple HTML. Just enough to impress the company owners.

Another consistent pattern (as mentioned before) is that very often their end-goal is to become an IT-manager. These are usually the better talkers.

Of course this is all anecdotal, but I've asked many other people that guide IT interns and they noticed the same thing. Probably something that can be researched.

Now I may seem like someone who thinks very highly of himself (reading my own text back, I can see why it looks that way), and that is exactly what made this difficult to explain to the company owners when I notified them about it. They simply do not believe the 5-to-100 ratio and think I have some difficulty accepting people.


I've been an assistant on 1st- to 3rd-year bachelor's courses in CS in the Netherlands (basic OOP, Software Architecture, Algorithmics and the like) and have probably seen most of the types of copying and forgery. I can confirm that most students don't really understand the material but just regurgitate what has been said, even at the academic level. Some examples of what I've seen, which sometimes lead to the types of interns you describe:

- 1-to-1 copy-paste, change the name: easily caught just by looking, no need for anything else. These will fail.

- Mostly copy-paste, changing names and some locations: easily caught by having them explain the workings of their code; they will almost always fail. Out of hundreds, I've had one student who copied but could easily explain the code and was simply lazy. These can slip through if the course is understaffed or badly designed.

- Hand-written assignments: mostly working in parallel when individual work is required. I'm not against cooperation, but usually you could see that one student got the exercise right and the other just copied it, and the copy usually lacks details that are present in the original. Copying is very noticeable from the similar layout and wording. If the assignment is too simple this is harder to detect, but it is quite easy in algorithmics courses. These cases are far harder to detect and prevent, though random individual check-ups are a good way to test the knowledge actually acquired. That needs lots of staff, which is never there because of 'budget cuts'.

The biggest problems are the group assignments, as noted in other posts. It is hell to get rid of people, as failing one usually means failing all, even when some didn't perform. The usual trick is either to fail the entire group and give them an individual assignment to filter out the incompetent, or to do the same on an individual basis. The problem is that you need tangible evidence, otherwise it is very difficult, though commit logs are a start. The people who should fail but get through are at least some of the 95% you mention.

The 5/100 ratio is not odd from my perspective. You describe the top 5-10% (who are good and whom you want at your company), versus the next 30-40% who are average and simply make things work but lack initiative or knowledge, versus the rest, who should look for another job, since applicant pools usually contain at least some proportion of unsuitable candidates.

Consider also that if you pay peanuts, you get monkeys, so that might be part of the problem.

ps: IT-managers without proper IT knowledge (and yes I mean the basics) are the same thing as a CEO who is unable to speak in public: Nice decorations on the office floor but utterly useless in practice.


I've had group assignments where group review had the ability to fail an individual member if they were a non-contributor. Every project I had with that professor had at least one person in my group who was a non-contributor (and was outed by the group in review, resulting in a failing grade).


Yes. I can imagine this to be common among all levels, but the interns I am guiding are mostly from HBO (between college and university level) and some MBO (college level). Did not expect it to be the case at university level as well.

I really do not understand why so many people want to get into IT if they are not really interested in it. Job security probably? Not for the money I assume as in the Netherlands the average salaries for even a senior programmer are pretty low. +/- EUR 20,81 an hour before taxes (+/- EUR 12 after taxes).

Sources:

https://www.loonwijzer.nl/home/salaris/salarischeck#/ (search for Application Programmer)

Anecdotal evidence of many programmers around me that I've met in the Netherlands.


RE the group assignments, my university quite successfully dealt with this by introducing a scaling factor based on peer assessments within the group. This scaling factor was strong enough to allow a group to fail one member if they received poor enough grades. Failing one member tends to boost your own grade as well, as the assumption is that you ended up having to pick up the slack.

I did three group assessments, and out of those three failed two students with the help of my teammates, by co-ordinating our peer assessment. This peer assessment does not become public until grades come out, at which point they can be disputed. But 3 vs 1 usually comes out in favour of the group.
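Roughly how I imagine the arithmetic working, as a made-up sketch (the actual formula and numbers were the university's, not these):

    # Hypothetical peer-assessment scaling: each member's group grade is
    # scaled by their average peer score relative to the group average.
    # Formula, names and numbers are illustrative assumptions only.

    def scaled_grades(group_grade, peer_scores):
        """peer_scores: member -> list of scores given by teammates."""
        averages = {m: sum(s) / len(s) for m, s in peer_scores.items()}
        group_avg = sum(averages.values()) / len(averages)
        return {m: group_grade * (avg / group_avg) for m, avg in averages.items()}

    # Example: three members rate the non-contributor poorly.
    grades = scaled_grades(75, {
        "alice": [9, 8, 9],
        "bob":   [8, 9, 9],
        "carol": [2, 1, 2],   # non-contributor
    })
    # alice and bob come out above 75; carol ends up well below a pass.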


> I did three group assessments, and out of those three failed two students with the help of my teammates, by co-ordinating our peer assessment. This peer assessment does not become public until grades come out, at which point they can be disputed. But 3 vs 1 usually comes out in favour of the group.

This is, inadvertently, fantastic training for sociopathic stack-ranked office politics.


> RE the group assignments, my university quite successfully dealt with this by introducing a scaling factor based on peer assessments within the group.

This is something my university has done, but didn't implement quite as well. I think the key here is making it completely anonymous (ideally filled out online individually) because otherwise objectively assessing a group member's performance in front of them is difficult.

With that said, I think I'd still find it hard to outright fail a group member via these means unless they literally put zero effort into the group project - it seems a small thing to threaten their entire degree over.


I failed a group member over this. They did put a small amount of effort in, but the problem was that the parts of the project they agreed to take on ended up having to be done or re-done at the last minute. If we had just known he was going to do nothing, we at least could have planned for it. In the end, putting in that small amount of effort actually hurt more than putting in none at all.


It may be a controversial opinion, but if a lazy student can explain a solution or an algorithm, it's not as bad. In the end, he/she understood the problem. Chances are he's a quick learner and will be effective at googling solutions off the net.


I really do not understand how so many people manage to fool almost everyone for so long.

To me, it's quite simple. Lack of teaching the fundamentals of programming. CS programs want to teach object orientation in first year CS. How can you understand object orientation and complex data structures until you've learned what a pointer or structure is (C code, yeah)? How can you understand a pointer unless you have a basic understanding of the machine, and in particular, how simple variables are stored in memory.

This is the boring stuff for professors and administrators of a CS program, but it is absolutely necessary if you want to produce quality CS graduates.

Otherwise good students get lost in their first year of CS, which is the real shame here, given the shortage of CS talent. I advocate for better teaching of fundamentals, preceding more complex topics in computer science. Without it, we will continue to see CS grads fall down in the work force.


How can you understand object orientation and complex data structures until you've learned what a pointer or structure is (C code, yeah)?

Speaking as someone who has spent considerable time in front of first-year college CS classes, attended SIGCSE conferences, and applied this in practice: trivially. Your "pointers-first" bias has long, long been debunked. Brown, Stanford, VT, and other CS departments have had fantastic programs teaching via OO-first methods for going on twenty years now. Studies of students taught this way have determined that they learn stronger design skills earlier, and have a greater focus on the problem to be solved rather than the machine being built. (At first: this is emphasis only on which skills are taught first.) Anecdotally, giving students a framework in which to understand why they're doing what they're doing early is hugely important. Then, a few semesters in, they do the deep dive into the minutiae that C, assembly, etc. afford.

This, in turn, is really no different than the era of pre-C++ CS instruction. Prior to C++'s big wave of popularity in the 90's, C was largely considered to have too many sharp edges for first-year CS instruction. Languages like Pascal, Modula-2, and even some Lisps were preferred to get basic algorithmic and data structures concepts across, with C generally relegated to the third semester or so. I'd call the common theme here "algorithmics and abstraction first" pedagogy.

C++'s burgeoning industry appeal eventually became a siren-song for many CS departments, who dropped the "not C first" pedagogical approaches and threw students into C++ right from the start. The result was generally disastrous. C++ is a terrible teaching language, with all of the sharp edges of C, with more in the OO/templates part. For contrast, Brown's program of that era used OO Pascal with a custom-built graphics library designed to be accessible to first-years, yet useful through senior project work.


As someone that studied on Universities that prefer OOP first approach, I would disagree.

Design skills can be only learned through experience. Understanding what abstraction to apply is something you would learn after many "failed surgeries". You can only learn this on the job and by reading a lot of code.

Writing efficient software is impossible without understanding the machine. Without knowing how things are represented in memory or how compilers and the OS work, you will never write good software. You end up with people who do not understand how the event loop in Node.js works or how to write thread-safe Java.

Just ask a few practical questions about Promises, the heap, GC, or threads and you will see blank faces from the same people who just white-boarded DFS.


I had a lecturer as a close friend in college. He was only a lecturer because he refused to do a dissertation to complete his doctorate. He was the best teacher in the department. So it goes.

Anyway, he would get up in front of the staff meetings and say: the school is graduating students who do not know how to code. The school didn't care; they got paid. Every class of any import had team projects because "that's what the real world is like". And every team would have the one person just skating by. The only exceptions in my entire undergrad education were the graduate-level course I took with a friend, which thankfully had two-person teams, and the one elective I took with the aforementioned lecturer, who actively kept tabs on who was doing real work through surveys and by looking at the source repositories.


It isn't the university's job to teach people how to code. University isn't a vocational school. Yes, one should know how to code when they graduate. But the school is usually trying to teach the theory, not the coding.


What? Almost the entire meat of my CS program was programming projects in a variety of foundational domains (operating systems, networking, concurrency, databases, graphics, compilers, cryptography, machine learning, visualization, etc).

Theory (discrete math, algorithms, formal languages) was 3 courses out of an 18-course program. HN keeps talking like it's the only thing you do in a CS major, which is bizarre.


I think you misunderstand. After an introductory course, it's expected that you don't need to be "taught to program" anymore. As part of your compiler class, you'll be taught about automata, regular expressions, grammars. For your projects, you might get some help with using a lexer and a grammar generator, but you should be able to figure out the "programming" parts yourself, and the teaching emphasis is on the theory.


Sure, but the vast majority of lecture time prepared you to do the programming projects, the vast majority of out-of-class time was spent on the projects, and the exams required minimal study besides having done the projects (one way of spotting cheaters was to look at the delta between understanding demonstrated out of class vs. in class).


See it this way: Each subject you finished at university, did you come at the end thinking "I learnt how to program", or did you come out thinking "I learnt how to structure a graphics pipeline"/"I learnt the tradeoffs involved in building distributed systems"/"I learnt how to structure data to make consistent updates and retrieval simple and highly performant"?

The most obvious way you apply what you learnt in those classes is by writing programs that implement those ideas (And it's also the easiest way to evaluate whether you actually learnt the concepts). But you can just as easily apply that knowledge by designing systems for somebody else to implement — somebody who presumably lacks the knowledge required to architect the system. Hammering out code is easy, and you don't need a university degree to do it. It's the rest of it that takes some study.


Where I studied, our lecturers expected us to learn the fundamentals of whichever programming language on our own time. The vast majority of our courses were focused on theory of CS/physics/maths/engineering/etc. That might explain why our class size dropped from around 100 in first year to around 15 by 4th year.


We had 5 kinds of maths; algorithms and data structures, databases, computer architecture (with assembly), English lessons, history of mathematics, operating systems, networking, and 2 programming courses: C and Java. There was very little learning by doing.


I suspect there's a fair bit of variance between CS courses, and even how we remember it.

In my case, the topics were similar to the list you give (although there were a significant number of optional classes which I filled with math, skewing it a lot), but the subjects were definitely more theory-focused than practical. Sure, there were practical classes for each subject, weekly projects, semester projects and so on, but they were secondary to the theory in both quantity and difficulty. As an example, you might deep dive into a number of networking protocols, but in the same period only implement a toy TCP/IP during a prac.


Interesting, I guess mine skewed the opposite way: the theory served the projects, and the vast majority of what you needed to know for theory-based exams, you'd pick up by doing the projects. Some lecture slide and textbook review necessary, but not a lot.

Always surprised to hear about this variance, given the general consensus that there aren't differences in quality between schools.


I'm curious, where did you attend? I started a CS degree (at UC Irvine), then stopped because I felt it was too abstract. Maybe I bailed too early.


Let's keep it real, the cheaters aren't opting out of learning to code because they want to dedicate all that extra time to mastering the theory instead.


The real problem is, the idealised goal of a university should be "learn something", but it's actually "get a degree".


Another case of "Once you turn a metric into a goal, it ceases being a good metric".



It depends on the class, but often times it is the university's job to teach people how to code. For example, if you make it through an algorithms class without knowing how to actually implement the algorithms, that is probably ok, as long as you have a strong enough theoretical understanding of them. If you make it through a systems programming class without learning how to do any systems programming, then that is a problem.


I'd disagree on this. Is it important for the university to cover the theory (algorithms, compiler design/automata/languages/parsing, logic, discrete mathematics etc)? Absolutely.

Does that mean there isn't space for a module on software engineering theory (design patterns, development methodologies, OOP principles, testing, version control...)? I really don't see why that has to be the case. Some of the worst code I've seen has been written by academics - undocumented, undescriptive nonstandard single letter variable names everywhere, huge monolithic functions named "compute" that span hundreds of lines and several levels of indentation.


"It is not the medical university's job to teach people to practice medicine. It's not a vocational school. The school is usually trying to teach the theory, not the practice"

See how it doesn't make sense?


No, because a medical school is literally called a professional school.


I don't know why you are marked down for this


The lecturers I had at uni would give individual grades if you ratted out the slacker and they couldn't prove what they did.

Using git helps a lot!


A big tactic the fakers use is to paraphrase everything everyone else says. We had a guy on our team who literally paraphrased everything said by anyone else in the stand-up. E.g. you could say we're waiting on the DBA team for a QA db (grr, large multinationals and their asshat DBAs), and he would seconds later say something like 'I think what's currently delaying our progress is waiting on the DBA team'. Last I heard he got big pay rises (as a coder, at the time, with a net negative contribution) and eventually entered management; my friend, who was a permie on the team, got an insulting 500 GBP annual increase and a bad performance review. He wasn't a faker; he quit.

Fakers, cheaters, bullshitters are everywhere unfortunately. Spot them early and avoid them where possible; pre-empt their attempts to grab credit.


"Fakers, cheaters, bullshiters are everywhere unfortunately"

We are. Source: I am a master-level bullshitter. Professional lobbyists and politicians ask me to make introductions, use my network, etc. Here's the weird thing, though: I'm a software developer. A pretty good one. Not the best, but top 20%.

I know tons of Natural Bullshitters like myself, but I've never met another one that was a software engineer. Dabblers, occasionally, but no Naturals.

The thing is, it takes a bullshitter to spot a bullshitter. So being a good bullshitter is the skill that has offered the most utility to the various jobs I've had in tech: calling out frauds, inflating the profiles of the people on my teams doing actual work, undermining lesser bullshitters that try to take credit from them... come to think of it, there's never been a technical team I was on where I was not the most popular member of the team (among my peers, at least).

So if you can find a Daywalker, try adding them to your team...


I can kind of relate to that.

I wouldn't call myself a professional bullshitter, and I wouldn't still have a business after ten years if I didn't have substantive knowledge of what I do.

However, my cultural and family background is in the humanities. I spent most of my adolescence and teenage years on my programming hobby, like many people here I think, but the route to a tech career was a rather incidental one. My parents are philosophy professors, and I entered university with a political science major (switched to philosophy), before dropping out two and a half years later to pursue what was by then an ascendant tech career.

The discord between my background and the psychology of typical software engineers and business guys has led to amusing, sometimes infuriating cultural clashes over the years.

Nevertheless, I've got the comment more than a handful of times over the years that I really should have been a writer or do something involving lots of speaking. You can probably tell from this comment alone that I'm not a person of few words. As I get older, this has seen me enlisted to assist with marketing copy, résumé polishing, LinkedIn profile formulation, and a variety of other tasks that require putting a spin on things and drawing on one's facility of rhetoric. Some have said I'm a lawyer on the inside, combining the logical, left-brained, procedural qualities of a programmer with a knack for wordsmithing eloquent pleadings.

It's a weird space to dwell in. The real nerds think I'm a bullshitter half the time. The real bullshitters think I'm a nerd who lacks the social jujitsu to really hobnob with Visionaries like themselves. Beats me.


I don't underestimate the benefit of soft skills, or the benefit that masters of org/office politics can lend to the rest of the team, especially when you're one step away from a contract renewal. I've had great PMs before who add the benefits you mention. But occasionally you get a toxic bullshitter in the coding space: a dabbler ranked as a technical peer, out for themselves, who somehow pulls the wool over management's eyes. Asshats.


> Every time you get dragged through some pointless, patronizing and degrading interview process that assumes that you don't know your stuff - you can thank the cheaters and fakers, and the fact that our industry doesn't have the will to root them out.

Mostly the latter, though it's us (practitioners), not the industry (employers). We're a visible, highly paid field of work without a professional body either setting quality standards for schools or certifying minimum competency of individual practitioners.


I agree with you that having a centralized regulatory body set standards would help, but our business is highly resistant to this, even if all programmers wanted it.

What should be regulated that makes sense in both Javascript and C? How should front-ends be treated compared to Assembly code? What best practices go where, and who decides this?

A ton of people have tried to create certification programs, and they generally fail to get real traction because every project is different from last year's project. I think this is because the field is advancing very quickly, and regulation works at a slower pace and can't keep up.


I have never interviewed/hired anyone, but the more I work as a developer the more I feel challenged to do it and invent techniques to detect liars.

One thing with science on the side of truth is that the simplest questions (such as "what's the difference between a boolean and an integer?") can have extremely complicated and deep answers that lead to crazy discussions about formal definitions, implementations on different platforms, and so on. These are the kinds of things you can only learn by actually learning them, and trying to bullshit one's way through an answer will quickly paint the faker into a corner.


The truth is, you don't need anything elaborate to expose a fraudster.

Give a very simple programmatic task, a paper (or a blackboard) and a pencil. As you watch the interviewee elaborate their solution, and as you discuss it with them, you'll immediately see whether programs are a deeply foreign object to their mind or something they're fluent with.

The point is not syntax, forgotten semicolons, or even knowledge of libraries. Don't bother with design patterns either, the kind of tasks we're talking about is too simplistic for them to be meaningful: just watch whether translating a task into program snippets comes naturally to them. If they can do that, they can learn, and probably have learnt a great bit already.


There is actually a lot of research into how to conduct interviews for useful answers. It all comes down to needing a deep discussion.

What works well is some form of "tell me about a time when ...". The candidate gets to tell a story of their choosing, but the discussion is long enough that they can't keep track of their lies...


> you can thank the cheaters and fakers, and the fact that our industry doesn't have the will to root them out.

Isn't this in every industry? You don't apply for the job you are most qualified for, you shoot for the next level job, the job that pays better, looks better, but stretches your expertise.


  Isn't this in every industry?
Many professions have professional institutions which act as gatekeepers and audit the quality of university courses. For example, in the UK to become an architect you must become a member of the Royal Institute of British Architects, which requires a degree, a certain amount of postgraduate training, and a certain amount of experience working under an already-qualified architect.

Needless to say, this means you can't drop out of university to found an architecture startup! And these organisations often have the side effect of influencing wages (and industry reforms) due to their control on the supply of workers.

People on programmer forums don't seem to like the idea of this sort of thing much. Better for my ego to believe I'm a unique genius due to my own efforts and inherent superiority; than to believe I'm a replaceable cog stamped out by a university production line!


And all those gatekeepers introduce significant economic inefficiencies while often failing to control adequately for quality.

Part of the problem is that programming is far too diverse a field. The only common denominator that could be reasonably certified by a gatekeeper/professional guild would consist of the theoretical grounding in software engineering and computing concepts that typically comes with a CS degree. Yet, as all programmers are aware, people who have managed to jump through that hoop do not necessarily possess programming skills and do not necessarily make good programmers, while there are plenty of self-educated types who have managed fine without the formalities.

That's not to say that the field might not benefit from some professionalisation, just that there is an argument to be made that it is qualitatively different from civil engineering, medicine, aviation, architecture, accountancy and the like, if for no other reason than that the technology and skill requirements shift at a bewildering clip in rolling releases across 3-5 year time frames. That sort of thing is anathema to the slow, bureaucratic pace of guilding or regulation.


I don't see this as such a problem.

For instance, calculus, linear algebra, differential equations, numerical analysis, and so forth, are only a small part of what makes a good actuary. But they have a formal exam that you need to pass in these areas in order to be an actuary. They don't quiz new candidates with 10+ years experience on integration by parts during a job interview.

Because we lack even a basic exam in software development, we have to do data structures and algorithms over and over in interviews. I'd be more than OK with setting up a basic exam for what we can cover. I'd much rather take these exams with some assurances that it will be graded consistently and fairly, cover the proper material, and will lead to a lasting credential respected in industry. I wouldn't mind this even if it involved substantial effort and studying on my part, because I'd know what I was studying for and why.

Hey, maybe it means my next software job interview won't have to involve weeks of studying followed by 5+ hours solving tree traversal problems at a whiteboard.


That's a very fair argument.

I still think there'd be far too much contention over just what is to be included in the corpus of essential knowledge. Everyone seems to agree that data structures, algorithms and systems fundamentals seem appealing, but just which ones depends greatly on whether you're doing embedded PLCs for fuel injectors or whether you're doing React.

I've been programming since I was 9, for most of that time in C. Yet I'd be very hard-pressed to tell you how exactly CPU instructions work, and have never touched anything "closer to the metal" than C. Should this disqualify me from the profession?


Almost nobody I know would say it should disqualify you from the profession, but I can see why you're nervous about this.

After all, law now generally requires a $150k+ degree and three years of your life. Plenty of people, including Obama, believe that it should be at least shortened. But once entrenched, it's very tempting for gatekeepers to install expensive and lengthy credentialing procedures.

So overall, I think there's a good way to do this, and I do think that software development could be a better profession with some credentialing, provided it was done sparingly and properly (a huge "if", I must admit). I'd like to see this happen along the lines of the actuarial field, with exams rather than required degrees, largely because I think people should be free to acquire the knowledge at a pace that suits them. If you want to major in math, or physics, or something else, and "read for" the exams (to quote an old law practice), you should be allowed to. If we had something like this, it might benefit our field immensely. It would be a huge improvement over the capricious, opaque, and arbitrary informal exam taking we do over and over every time we interview.

But... I can't possibly pretend I don't understand your concerns. I personally was a math major and I got a MS in Industrial Engineering. I never did major in CS, I certainly didn't study "Software Engineering." I am also willing to openly question the value of test-driven development. There are plenty of people who think one or the other or both should disqualify me from the profession, and the idea of empowering them with the force of law (ie., the power to fine or imprison me for developing without the background and methodology they've decided is acceptable) is pretty chilling. That would be much worse than what we have right now.


> Yet I'd be very hard-pressed to tell you how exactly CPU instructions work, and have never touched anything "closer to the metal" than C. Should this disqualify me from the profession?

No - it shouldn't. I feel that you should want to know this level and more about the internals (and history) of computers. I'm biased, in that this has been a driver for me personally...

...but to give you a hint:

A CPU (in the most basic sense) is nothing more than a very fast and very complex player piano, regulated by a clock. At each "tick" of that clock a row of "holes" (bits) representing a variety of information (address, data, instruction) is presented to the CPU. These bits are actually voltage levels. The CPU uses these voltage levels to change the state of logic gates in the system (which are wired for a variety of different tasks - but you only need one particular kind of gate to make any others - a NAND or NOR), which determine what happens, what to do next, etc.

I won't go further - and let me be clear that what I wrote above is nowhere near complete, and only describes the most basic level of an "ideal" CPU (or ALU or whatever) - because in the real world, you have things like voltage transients, timing issues (signals not arriving at the same times - hence things like wait states and such), noise, plus things like triggering on rising or falling edges of clock signals, multiple clocks staggered in time (for multiple triggerings per "cycle"), and a whole host of other things.

Yeah - it's pretty complex. But at the heart of it, it is pretty simple - a player piano, more or less, with a music reel spinning at mind-boggling speeds. If you need or want more information, there are (like everything else) plenty of resources on the internet about how all this stuff works. Check out some of the sites on people who have built computers from relays or logic gates; they typically have a lot of good information about the lower level stuff. Also look into some of the old books on building computers, and how they work (I'm talking the practical computing books from the 1950s and 1960s - difficult to find at times, check on used books online, in stores, plus on google books and archive.org; also, old back issues of Byte Magazine from the 70s and 80s are a good resource).
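If you want to see the "one kind of gate is enough" idea in action without a soldering iron, here's a tiny simulation, purely my own toy sketch in Python:

    # Everything below is built from NAND alone. 0/1 stand in for the
    # low/high voltage levels described above.

    def NAND(a, b):
        return 0 if (a and b) else 1

    def NOT(a):    return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))
    def OR(a, b):  return NAND(NOT(a), NOT(b))
    def XOR(a, b): return AND(OR(a, b), NAND(a, b))

    # Half adder: the first step toward an ALU, still nothing but NANDs.
    def half_adder(a, b):
        return XOR(a, b), AND(a, b)   # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))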


> Better for my ego to believe I'm a unique genius due to my own efforts and inherent superiority; than to believe I'm a replaceable cog

Other than your own projection going on in the first part, I'm not sure what you're actually saying. The fact there are not gatekeepers makes us more replaceable - some guy can take a bootcamp, work on some projects, and bam, he's competition. Programmers are acutely aware of this, and are constantly leveling up as a consequence. (And on that note, if there were gatekeepers, we'd probably be a lot more stagnant as an industry).

Also, I mean, common sense: the tech and abilities companies require are constantly changing, so how is a gatekeeper system even going to work? I've gone from Rails to JS to TensorFlow, professionally, all in the last 3 years.


I can't understand the mindset of someone who would go to all these lengths just to be employed at something they don't understand at even the most basic level. Those same people skills could make them a lot of money in sales without all the grief.


OK, but real success in sales takes enormous talent, skills, and experience.

There are many different job-roles out there and it is hard to find out what these roles even are. It takes time for people, especially freshly graduated students and career changers, to find their niche. At some point almost everyone gets themselves into situations where they're not competent and then realize they need to either make a switch or get focused.

Rather than viewing such career-exploration failures as "fraud", as some are apt to do here, it is better for employers to take a more active role in developing talent and recognize that early career folks are going to be hit-or-miss.


Unless the employer has invested a large amount of money on a salary for someone who is not skilled in their supposed area of expertise. In my experience, developers tend to start their careers making more money than salespeople starting their careers.


I suppose sales is already overcrowded with those people, so some of them expand to "greener pastures".


Yeah, no doubt that's a huge part of why we have these interview-as-exam experiences in software development.

Still... the exams are harder than you'd need to simply weed out the fakers and cheaters. I can do various linked list and tree operations, set permutations, and so forth, and it's not like I've memorized it. I can apply these things. But the questions I've been asked in interviews... solving that, at the whiteboard, in less than an hour, all I can say is the companies are setting a much higher bar than merely ensuring that someone knows his or her stuff. That's fine, but let's make sure we don't pretend this is all about rooting out people who can't do fizz buzz. You can be pretty good at this stuff and still fail pretty badly at these interviews.


I think employers have a degree of responsibility here. Ultimately, they're the ones who've taken projects that ought to be within the capabilities of single individual hackers and said "this needs a team of six and a project manager". Thus transforming the project from principally technical (or tech + requirements gathering) into one of those highly-political group work exercises.


Employers don't need to take responsibility. All they need to do is to think about income. If they can bill the customer for an employee's "work" it doesn't matter to them whether the employee can actually do anything or not. This is common practice in the IT industry.


I don't know about companies, but I have a tendency to not pay a contractor's bill if he didn't deliver the work as agreed.


Yes, we all get that you don't get that it's easy for larger companies to mix in workforce that either does nothing or does not even exist. This is rampant in the construction and IT businesses.


>Every time you get dragged through some pointless, patronizing and degrading interview process that assumes that you don't know your stuff - you can thank the cheaters and fakers, and the fact that our industry doesn't have the will to root them out.

Not only that, but it devalues and dilutes your own degree, which is significantly more damaging monetarily.


Not sure that PE/CE status will help. My father commented, when going for his IEEE chartered status in the UK, that it's not what you know but who you know.


People have no idea how many times in an interview (especially with big corps) I end up talking with a PM and he tells me, "Yeah, I started here as Project Architect right after my CS degree. I really don't like coding or anything related, so I never do it; I focus on architecture and guide the others."

And then I mentally run away.


This is a fantastic way to end up with a code base 15 layers deep, with each layer converting the properties of one object to properties on another object. It sure looks pretty in architecture diagrams, though.


Sounds like the "n-tiered" .NET MVC code I work with that just has "n-tiers" over stored procedure calls.


Very familiar to me also, but with by far the largest tier being the JavaScript right at the front end.


As in code, so in organisations: somewhere in the stack some entity must actually do some work, or the whole will not function.


Divide and conquer, or in this case, divide and delegate. Recurse until the call reaches the base case with no authority to delegate, then the work is performed and returned.


What companies let fresh CS grads start as Project Architect?!


They normally start as a project analyst, product management assistant etc


What company has product management people work on architecture? Usually they focus on what the product should do, not how it should do it.


It doesn't matter, even for smaller corporations: if you can bill the PM's work to the customer, he's just as qualified as the guy with top grades from a top university.


The solution is effective hiring. Work sample or project based hiring methods get people to demonstrate their skill-sets. I once had a candidate refer to resume/conversation based hiring as 'ritualized lying'. I think that's bang on.

This isn't directly in response to your comment, but...

There's an elitist under(over?)-tone within HackerNews that asserts any less-than-engineer technical role is inferior and attracting of undesirable character. There's a particular disdain for project managers, often for good reason. Few things are more aggravating than a poor non-technical PM. But, in many organizations they're essential and valuable members of a team. Skilled people can have an immense degree of technical talent and no interest in programming or engineering culture.


The whole "effective hiring" answer doesn't scale. I am absolutely appalled at the idea that I would have to do a 'fizzbuzz' test for a purportedly senior developer listing 'strong C/C++ skills', for example. However, that's the reality we face, and part of that reality comes from having people be comfortable with lying their way into our "profession" and us not be completely aghast when it happens.

A fake doctor or accountant is a scandal. A fake programmer is business as usual.

On the elitist over/under-tone stuff: I should have been clearer. I think PM roles, documentation roles, testing roles, etc are all extremely important. However, I don't think these are roles where you should stash incompetent and dishonest pseudo-developers; I think the odds are pretty good that someone who fakes their way through being a developer isn't going to be that good at anything else.


> A fake doctor or accountant is a scandal. A fake programmer is business as usual.

I generally agree with your points, but people don't die or go to jail because of bad code. With a few exceptions. And in those exceptions there are additional checks (checklists upon checklists, rigorous testing gauntlets) to prevent buggy code from causing safety issues.

Technically underqualified people typically result in missed deadlines and/or poor project velocity. That can be catastrophic for a business model, but it doesn't directly result in any kids growing up without a parent.


> After reviewing Toyota’s software engineering process and the source code for the 2005 Toyota Camry, both concluded that the system was defective and dangerous, riddled with bugs and gaps in its failsafes that led to the root cause of the crash.

> Bookout and Schwarz v. Toyota emanated from a September 2007 UA event that caused a fatal crash. Jean Bookout and her friend and passenger Barbara Schwarz were exiting Interstate Highway 69 in Oklahoma, when she lost throttle control of her 2005 Camry. When the service brakes would not stop her speeding sedan, she threw the parking brake, leaving a 150-foot skid mark from right rear tire, and a 25-foot skid mark from the left. The Camry, however, continued speeding down the ramp and across the road at the bottom, crashing into an embankment. Schwarz died of her injuries; Bookout spent five months recovering from head and back injuries.

It's not just spacecraft that kill people due to crappy code. Even something as mundane as a _Camry_ can be fatal if the software team isn't diligent about their work.

http://www.safetyresearch.net/blog/articles/toyota-unintende...


I was thinking of car control systems, among other things, when I mentioned "exceptions". If this sort of catastrophe happens, there are bigger issues than technically underqualified people making it past technical interviews.

And I wouldn't call safety critical software that can kill people "mundane". It's a minority of code out there and it tends to be really thoroughly vetted, in my experience. I can't comment on Toyota's quality assurance, though.


As a foil to your claim that people don't die from bad code:

In 1999, when the US was engaging in a bombing campaign in Serbia against "military targets" that were staffed by civilians, the US bombed the Chinese Embassy, killing several and injuring dozens of civilians.

The US later "apologizied" by saying "sorry, we programmed the coordinates improperly."

Now, I'm not sure if that means some CIA intern college kid task monkey supplied the wrong coordinates to the bomb programmer, or if the bomb programmer typed the (correct) coordinates improperly.

But, software absolutely does kill people when wielded by lazy, incompetent, and often, diploma-holding programmers.

https://en.m.wikipedia.org/wiki/United_States_bombing_of_the...


One of the problems with GPS designators was that when you changed batteries, the unit reset to your current location, which has led to more than one blue-on-blue incident.


Wow, that's brutal. I wonder if those American lives are considered collateral damage too to those pushing for war


Yes it was a bug in the embedded software as I recall - the testing did not make it squaddie proof.


Pretty sure operational missile targeting isn't done by programmers.


Do you think a typographer can enter coordinates into a system from the late 80s?

The pilots themselves can't even enter the coordinates. These planes were programmed by the CIA.


I believe they are entered by authorized service officers using methods typical of other industrial data entry applications (e.g. a serial console). Entering waypoints and target coordinates has nothing to do with computer programming. Just like when you slip an extra trailing zero into your Excel spreadsheet, it's not a programming error.


While I can agree with you that entering waypoints is not quite analogous to software engineering, I think we are arguing over semantics.

Especially in 1999 and especially with legacy systems, the word "programming" means "providing computers with coded instructions for the automatic performance of a task".

This is similar to how a television network would "program" a list of shows to occur in order automatically.

I suppose my overriding point here is that there are credentialed people with smart sounding degrees whose maximum extent of interacting with computers is typing in instructions.

And therefore, we should be extremely wary when we use credentialing as the litmus test for "should they enter bomb coordinates that will be flying over civilian heads"


> The solution is effective hiring.

Meh. Hiring is expecting other people to train your employees for you.

It's important to hire smart, driven, forthright people, but if you really want them to, say, write readable code, assign them to read Clean Code, make code clarity part of the regular performance review, add it to the code review checklist, etc.

In my opinion, bigger than hiring practices is a lack of communication up and down the chain of command about the importance of good code, who is writing it, and what are the carrots and sticks involved in making sure good code happens. Most people able to hand out raises and job titles work off secondhand and hand-wavy information about technical prowess at best:

"Yeah, he's really good. Fixes lots of bugs and get asked for help a lot."

Of course, that could be because the code's a big mess and he wrote himself some excellent job security. A non-technical PM would know little about this very common issue.


TBQH, Management does not care about clean code so much as getting a project out by the deadline however unrealistic it may be. Clean code is very low on their list of priorities. Even among managers who used to be engineers it's very low on the list of priorities. They say they care about it. That's not the reality though.


It depends on the company. One place I worked at instituted a bonus system with the result that programs were released on time, buggy or not, as releasing them late would have reduced the managers' bonuses. I raised a stink about that by repeatedly saying "We really must inspect the code before the product ships." I was sidelined and let go early when the company shrank to a shadow of its former self, infested by people who were more interested in furthering their career than doing the job.

The next company I worked at was very serious about quality. When I started working there, I was told that I was allowed one bug in my code. If there were a second one, I would be fired. I'm not sure if that was true as my code didn't have any (found) bugs. It was thoroughly inspected by two senior engineers before I could even run it on the test bench. We were some months behind schedule but there was not one word about reducing the quality to catch up.

The first company made communications equipment, the second one an air traffic control system.


I can understand a strong emphasis on quality for an air traffic control system, and I am glad that more weight was put on getting out quality code than on meeting a deadline (especially given some companies' interpretations of DevOps/CI/CD). I'm curious what sort of defect penetration you saw once the code reached the test bench.

I've had lines of code go through walk-throughs, inspections, paired programming, the works, and still end up failing somewhere down the line in integration testing.


Is the work sample done in person or take home? If it's take home, then they can just cheat on that by hiring someone else to do it.


At Compose (YC S11) we had a two-part hiring method. The first was a blind work-sample. You had unlimited time to complete it. Identifying information was obfuscated and graders scored yes/no against objective criteria. The best samples proceeded to the "work day".

During the work day, you'd join us in Slack for 8 hours. You'd need to ask probing questions to mock up our infrastructure and pitch a feature based on "how we work".

One could try to cheat on the first part, but there is no chance one could have bamboozled us during the work day.

For our technical writing positions we had some instances of outright plagiarism, but no instances of anyone trying to cheat the coding sample.


> During the work day, you'd join us in Slack for 8 hours.

That's amazing! For everybody else that shares my amazement, I found https://www.compose.com/articles/how-compose-uses-interviewe... that touches on said hiring practice, but I'd love to hear more.


Just out of curiosity, how many people applied vs. made it through the blind work-sample vs. the work day? I think this is a great system, but I'm not sure I could keep a team of engineers actively engaged in answering infrastructure questions while at the same time trying to get their own work done. Was that an issue?


Are you still working at Compose? I'm curious how the IBM mandate of all remote workers going back to an office every day has affected Compose.


How much did you pay people for the two days worth of work?


The trick is to then talk through the code with them in the next interview.

"Why did you do this part this way? Would you do it differently if requirement X were different?"

It's immediately obvious if they cheated. Better yet, that's a much better, more relevant discussion than most whiteboard problems.


And most of these people can't code FizzBuzz... which is why it is so important to ask a question like that. Most of these people will also fail a whiteboard interview spectacularly.


The thing about FizzBuzz is that, if you Google programmer interview questions, it's the first thing that shows up, and it takes about 2 minutes to wrap your head around.

Somebody who can't do Fizzbuzz in an interview has literally done zero preparation for that interview.
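For reference, the whole exercise is about this much code (a minimal sketch; any equivalent variant is fine):

    # FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print "Buzz",
    # multiples of both print "FizzBuzz", everything else prints the number.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)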


But even then, you'd be surprised how many people don't even do that.....

Even so, I usually ask something that is barely more difficult, like traversing a tree or reversing a sentence.

There is no point at all in asking the hardcore, Google interview questions, as a simple question tells you just as much, which is "do you have a pulse".
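For a sense of the difficulty level, "reverse a sentence" is roughly this much code (one of many acceptable answers; the function name is just mine):

    # Reverse the word order of a sentence, e.g.
    # "the quick brown fox" -> "fox brown quick the"
    def reverse_sentence(s):
        return " ".join(reversed(s.split()))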


I got asked a compiler question once: how to find dominators. I haven't had to find dominators since a college course a very long time ago; I've mostly only used them. Looking it up, it's definitely not something you can come up with on the spot, because it's so clever.
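(For the curious, the straightforward iterative data-flow version is sketched below; the genuinely clever one is Lengauer-Tarjan, which I won't reproduce from memory. The graph representation here is my own choice, purely for illustration.)

    # Iterative dominator computation over a CFG given as {node: [successors]}.
    # Dom(entry) = {entry}; for every other node n,
    # Dom(n) = {n} union (intersection of Dom(p) over all predecessors p),
    # iterated to a fixed point. Textbook version, not Lengauer-Tarjan.

    def dominators(cfg, entry):
        nodes = set(cfg)
        preds = {n: set() for n in nodes}
        for n, succs in cfg.items():
            for s in succs:
                preds[s].add(n)

        dom = {n: set(nodes) for n in nodes}
        dom[entry] = {entry}

        changed = True
        while changed:
            changed = False
            for n in nodes - {entry}:
                new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                             if preds[n] else set())
                if new != dom[n]:
                    dom[n] = new
                    changed = True
        return dom

    # Example: a diamond-shaped CFG. The join node D is dominated only by
    # itself and the entry A, not by either branch.
    print(dominators({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}, "A"))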


Those questions tell you more about the interviewer than the interviewee.


Was this interview for a compiler position?

Anyway, dominators make sense for arbitrary directed graphs (with a notion of root).


Yeah, but it was for a target-dependent backend position.


If a candidate can google the problem, understand it, and then learn how to write a simple variation of it they have demonstrated all the skills necessary to be a useful coder at an awful lot of companies. I'd kinda prefer them to learn basic programming a little more ahead of time, but I honestly don't care when you learn how to program.


Wasn't it also the University of Sydney that recently made graduates take their tests in English and checked their ID before taking the test? I think some students sued after a 90% failure rate.

So many degrees are as valuable as toilet paper.


Is it not standard to have your ID checked when taking exams? I had to take my university ID with me to all that I sat at a British uni.


What language were they asking Australians to take their exams in before?


I may be confusing multiple issues. I've heard there are tonnes of Chinese tutors, and I've worked with graduates who couldn't write or speak English to save themselves. I'm not sure if that's allowed on tests.

I can't remember the exact change they made for the testing though, it was in the news a couple of years ago.


Not sure I get your point. I don't know Australian culture, but wouldn't English be appropriate?


We have a very multicultural population with many recent migrants. English is our language, but many of our residents and citizens haven't been speaking it all that long.

Our universities also have a high proportion of international students. Predominantly from China, Korea and elsewhere in SE Asia. These students pay much higher fees than Australian citizens. They're used to subsidise the local education market. Classes are conducted in English, but allowances are often made for assessments.


At a 90% failure rate, that would make the degree so much more valuable.


Only to those people that got them after IDs were being checked.


quite often the plan in the mind of these folks is that they will try to fake their way through and be moved up into project management

Wait a minute, are you saying all I have to do to be promoted to Project Management is to code really badly?

Hmm. I wonder if C# has GOTOs...


Even better! It has exceptions! You can write code that deliberately throws an exception where the handler calls a function. It's more like come-froms than GOTOs.

Please don't do this.


This is a somewhat common (possibly even 'Pythonic') idiom in Python.
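For example (one common shape of it; whether it's truly Pythonic is debatable, and this is just my own illustration), using an exception purely as control flow to bail out of nested loops:

    # An exception used purely for control flow: abandon the nested loops
    # as soon as a match is found, then handle the "result" in the handler.

    class Found(Exception):
        pass

    grid = [[3, 7, 1], [9, 4, 8], [2, 6, 5]]
    target = 4

    try:
        for i, row in enumerate(grid):
            for j, value in enumerate(row):
                if value == target:
                    raise Found((i, j))
    except Found as hit:
        print("found at", hit.args[0])
    else:
        print("not found")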


Please don't do this.

Don't hate the player, hate the game.


I guess it's pretty common. One of my old coworkers was promoted to a mostly management position; this guy couldn't figure out how to print something out in Python without calling a colleague.


>Hmm. I wonder if C# has GOTOs...

Why yes, yes it does.

You can also alphabetically sort a list of 30 things by adding those things to a list line-by-line, sorting them in alphabetical order yourself.


You can also alphabetically sort a list of 30 things by adding those things to a list line-by-line, sorting them in alphabetical order yourself.

I see you've worked in enterprise too!


That happens because we fetishize specific educational goals for those secondary roles and filter out broad swaths of the educated population for no reason. You have to have a technical degree to not be deleted from the pile.

We should be hiring English and Sociology majors to be PMs and marketing people.


> The idea that "oh, they'll get caught in the workforce" is not correct - quite often the plan in the mind of these folks is that they will try to fake their way through and be moved up into project management, technical marketing, etc. and never have their lack of these specific skills exposed again.

Even if they don't follow the Dilbert Principle path, it's not like there's a universal catalog of how skilled any given employee is; someone who can bullshit their way into a job and can last there for six months is still employable after they're let go when it's discovered that they don't know what a for loop is. Not everyone checks references.


Wow, I was a student there at the same time.

Cheating was rampant. The submittal box for assignments had a big enough slot that smaller arms/hands could reach inside. I saw people do that: grab someone else's assignment, copy it and put both back in the box. The school didn't seem to care.


Misogynist post ahead. You can make yourself feel good by downvoting this.

In Poland, where I did my CS degree, girls studying CS are very few. As a result, girl students get a lot more (sexually motivated) attention than in other, less technical subjects. Long story short, guys were falling over themselves to help girls pass their programming assignments, do their homework for them, etc. (Some) girls were very happy to take advantage of it. I wouldn't be surprised if they had guys helping them cheat on exams.

Supply and demand, plus sexual attraction are a part of the problem.


As others have noted, this is unlikely to explain it.

I'm a professor at Carnegie Mellon. In the most recent semester, I taught a large undergraduate intro to programming course (CMU 15-112 is similar in audience to CS50 at Harvard. Our cheating ratio - about 10% - was similar this year, and was also higher than normal). The course is taken by almost all first-year students at Carnegie Mellon - it has about a 50/50 gender ratio. Out of curiosity, I examined the stats for our cheating cases - briefly, so I may have made some errors. But males accounted for about 2/3rds of the cheating cases. It's not clear that that is statistically significant, or if it boils down to "about the same."

Unlike your hypothesis, this is based upon the actual cheating statistics from a class of about 450 people. :-)

Some time over drinks, remind me to tell you the story of the guy who asked his g/f to go get him a snack from the vending machine, stuck a USB drive in her computer, and stole her homework to hand in as his own.

Cheating happens across genders and nationalities. We look hard to identify trends, because we try hard to find ways to intervene before the cheating happens. Most cheating comes from poorly-managed stress. A smaller, regrettable subset comes from people who actively seek to cheat their way through something -- but most of it is simply a bad decision under time pressure.

The NYT article's hypothesis that people are putting more hopes on CS classes for their future careers rings true to me. We see a lot of people who are (a) not doing very well in the class; and (b) (probably mistakenly) believe they need an A or a B in the class in order to succeed in <thing they want to do>. This creates a stressful situation for them, and some respond in a way that bites them in the long run: they cheat. And, because this is CS and we have MOSS and other tools, they get caught.


I'm not pretending to explain why all cheating occurs, or who cheats more. Only what makes the problem worse. In any case I don't know enough woman programmers to make a statistically significant argument. I just described what I encountered while earning my own degree.

It could be that those girls just had a cheaty nature and would cheat one way or another. On the other hand, there's a saying "Opportunity makes a thief". It's why during a war looters and marauders become a problem. In other words, some people cheat because an opportunity presents itself. Some horny guys essentially offer to cheat for you, and you have to actively resist.


If you're not pretending to explain why all cheating occurs, and we're talking about all cheating based on the article, it really just makes you sound like you have a bone to pick with women. When confronted with the statistics that someone else is seeing more MEN cheat, even with this supposed "situation" you describe, you don't react by thinking "Hmm maybe my anecdotal idea is wrong" but by explaining how it's still relevant. You seem strongly biased here.


You are welcome to spin strawman arguments like the "statistics" above that men cheat as much as or more than women. But don't expect me to try to prove what wasn't my point in the first place. That's misrepresenting my original post and I won't fall for it.

And I'm not even saying "my anecdotal idea is wrong", because it is not. I've seen it happen that way; it doesn't matter whether the scale is smaller or larger.

I just highlighted another motivation / mechanic of cheating. And it happens the other way too, with men manipulating women. In my first job (not programming) there was a handsome guy pretty adept at this. He would say things like "(Stretching) It would be so nice if someone opened the window." --- 3 girls rush to open the window before they realize they're being manipulated. I wanted to add to the full picture of why people cheat.

But I'll show you how my original post in this thread can be generalized:

Some people cheat not because of peer pressure, time constraints, inability to understand, "because everyone does", or the collaborative ethos among computer programmers... but because other people spoil them and offer to "help" them in exchange for perceived favors. It doesn't even have to be a sexual attraction thing; it can be "helping this guy because his father owns half the town". If you learn you have some perk that really makes others want to do favors for you, you're tempted to abuse it more. Sometimes being good at flattery is enough!

Another explanation of this phenomenon is: people are more likely to cheat if opportunities are very easily available. But that facet is already mentioned in the article.


That's fine; they are likely to receive the same benefits on the job as well. Who's to say it's not a team-building skill?


But that way they learn to manipulate people instead of learning programming. The point of a programming assignment or exam is to force people to understand programming. If you're hiring programmers, you likely want "Master's degree in IT" to mean "this person can code", not "this person can manipulate people".

Good for them, but not necessarily good for other people and the company in general. And in my experience, romantic or erotic relationships in a workplace are a very bad idea. It's probably for this reason that a woman on a ship used to be associated with bad luck: in an all-male crew, people are likely to fight for her favor instead of cooperating. Note I'm not saying women shouldn't perform traditionally male jobs; I'm pointing out that an unhealthy ratio of men to women can encourage extra cheating.


learn manipulating people

This is called management, and it's well paid.


In that case, they should brag about it: "I never finished any programming assignments, because I managed to make other people do it for me."


I'm pretty sure they do, just not to the people who can ostracize them for it.


Good insight; that would explain the mechanism propelling a fractionally small minority of cheaters, at best.


I've been a TA for a couple CS classes at Iowa State University. Cheating here at least is pretty rampant. I would guess 1/3 or more of the students cheat in some capacity.

Some of it is really easy to detect, too. For a first-semester freshman class, I tried using MinHash to cluster student submissions together. Maybe 10% of the assignments had near-duplicate pieces of code. More comprehensive methods like MOSS (Measure of Software Similarity) find much more.
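
For the curious, the MinHash approach boils down to something like this (a rough, illustrative sketch; the shingle size and hash count are arbitrary choices, not from any real grading pipeline):

    import hashlib

    # Near-duplicate detection: shingle each submission into word n-grams, then keep
    # the minimum hash per seed; matching signature slots estimate Jaccard similarity.
    def shingles(source, k=3):
        tokens = source.split()
        return {" ".join(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

    def minhash_signature(shingle_set, num_hashes=64):
        sig = []
        for seed in range(num_hashes):
            sig.append(min(
                int(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingle_set
            ))
        return sig

    def estimated_similarity(sig_a, sig_b):
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    a = minhash_signature(shingles("for i in range ( 10 ) : print ( i )"))
    b = minhash_signature(shingles("for j in range ( 10 ) : print ( j )"))
    print(estimated_similarity(a, b))  # close to 1.0 for near-duplicate code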

It's hard for professors to take the time to deal with it. A professor more or less told me that the process of reporting it to the dean of students would take months of his time, and that he already spent 55 hours a week on his classes. He explained that the time it would take to go after the cheaters would detract from the time available to make good assignments, labs, and lectures for the students.

It's hard to see a professor for a large class (500 people) having the time to deal with cheaters, so the problem continues. I'm curious how common a problem this is at other schools.


In Germany, what you call cheating isn't even considered cheating.

On homework, we openly collaborated and handed in near-duplicates without trying to hide it. Some students would just copy the work of others. None of this mattered. Nobody "went after" it. It just got graded, you got your points, and you moved on.

Most people accept the fact that grades don't matter, but you need them for your resume. For some reason, universities have moved on from being institutions of knowledge production to being diploma mills, and grades are their way of succumbing to society's need to make shit measurable. The way we cope with this is to just hand out grades that are utterly meaningless, but keep the system alive.

In the meantime, people carry on with their lives. Some people at university want to learn things; some are just enrolled because it's a cheap lifestyle choice.

In the end, nobody cares about grades. Whether I actually know my stuff is my problem, not my professors'. No company hires you based on your grades, but a lot of HR personnel will reject you based on them. So you need good grades. But good grades alone won't get you shit. We live in this weird limbo where grades are a necessity but, beyond that, don't mean shit. They are the shittiest heuristic known to man.

If grades were a scientific theory, those who came up with them would be ostracized as crackpots. Neither meaningful nor in any way falsifiable.

A written exam on quantum field theory can only test so many things. Since it's written, you have to ask for things that can be produced in a given timeframe. That leaves you with memorizable calculations. Any professor will tell you that such an exam tells you nothing about whether the student actually understands the subject matter. A 2-minute discussion of the topic in very high-level terms is probably more useful for assessing actual aptitude.

Professors KNOW that they can't possibly measure the ability of individuals in classes of more than 15 students. Why would they care about cheating? They are part of a system that doesn't make sense, just as much as everyone else.


> In Germany, what you call cheating isn't even considered cheating.

Having taught computer science and HCI courses for ten years at German (Bavarian) universities, I cannot agree.

Of course, some professors/groups are more interested in finding and addressing plagiarism than others. However, I have only once encountered a professor who didn't really care about a plagiarised thesis.

In general, addressing cheating is much easier in Germany, as professors and teaching assistants usually may (and have to) take appropriate measures themselves without involving deans and examination offices.

Typically, teaching assistants will check submitted assignments both manually and automatically (usually using MOSS or JPlag, sometimes custom scripts). If plagiarism is found, affected students are called in to a meeting with the teaching assistant to present their side. Unless students can present evidence that they are not at fault, they are given a stern talking-to and a failing grade. If students do not accept the outcome, they can ask for an appointment with the professor (very few do this). If the professor upholds the decision (which they usually do), students may escalate the case to the dean. This is done very rarely (I know of only one case).

In my experience, staff at German universities definitely cares about countering cheating. We will certainly never catch every cheater.

However, I would even argue that addressing cheating is easier in Germany, as the process is generally more light-weight than at Anglo-American universities. As students face fewer consequences for admitting cheating (just failing the course, not getting expelled), they usually accept these consequences and try not to get caught again.


I'm not talking about theses. I'm talking about "Übungszettel" (weekly exercise sheets).

I'm aware of the fact that Bavaria takes a bit of a harsher stance on this.

In Heidelberg, nobody gives even the slightest crap about "cheating". But as I said, it isn't considered cheating anyway. Students are expected to collaborate on these things.


> I'm not talking about theses. I'm talking about "Übungszettel".

Sorry for the confusion. While I mentioned theses in passing, my statements were about programming assignments in undergraduate courses where students were expected to turn in individual, original work.

> I'm aware of the fact that Bavaria takes a bit of a harsher stance on this.

I'm actually not aware of this fact. In my experience, how cheating is addressed depends much more on the stance of the individual professor than on that of the university or federal state. Extrapolating from your experience at one department within one university to all universities in Germany seems audacious.


Would you be alarmed if you found out your doctor had entirely cheated their way through medical school and knew very little medicine?

Great, you figured out that grades aren't a perfect metric. But so what? Grades don't prove a student is great, but they can eliminate a lot of students that are bad. If the bad students just cheat their way through, everyone else down the line suffers for it.


- If you actually read what I wrote, you'd have figured out, on your own, that I didn't say you should cheat in order to learn less, but that you should leave grades behind in favor of actually acquiring knowledge.

- Before a doctor is even allowed to practice medicine, he has to go through 5 years of education followed by 3 years of apprenticeship, followed by 5 years of specialization. By that time, he's logged thousands of hours of supervised doctoring, and he only "graduates" from that phase if all his superiors write stellar reference letters for him.

I don't really care if he cheated on his first-year anatomy exam. Not particularly.

Beyond that, doctors are held to a higher standard than other people for a reason: medical mistakes are usually final. But grades are a very shitty way of figuring out whether someone is compassionate, ethical and honest enough to be a good doctor.

- Modern society is engineered for efficiency. I consider it any person's right to make the best of it for themselves. What's best for an individual is very often not what's best for society as a whole. If you follow the script of modern society, you end up miserable. I'm not a fan of that. Ideally, nobody would cheat on those stupid exams, but I won't hold it against anyone. Ideally, I don't want 10 million Indians immigrating to every first-world country either, but I won't hold it against any one individual who tries.

(I don't have a problem with Indians; it's just that there are a lot of them, and it feels kind of overwhelming at times)


Parent of a surgeon here. My son's medical school did away with grades altogether - it's pass / fail. They still have to pass the board exams every year.


Is it cheating for a professional software developer to use Stack Overflow? If they copy and paste, perhaps, but not if they use it to get a better understanding of something. It's very difficult to grade understanding.

Doctors go through a lot to graduate. Universities are welcome to put all students through the same level of scrutiny, working closely with a professional who can decide if they are unfit for the course.

Aside from thesis defences, they usually choose not to, because contact hours are prohibitively more expensive than a room full of exams and a few TAs.


It's not any more cheating to look at Stack Overflow than it is for a doctor to look up more information in a medical reference book. But neither should rely on those resources to do their job for them, with no understanding of the underlying domain. A developer should easily be able to write FizzBuzz without using Stack Overflow.
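
For reference, FizzBuzz really is this small (one of many ways to write it):

    # Print 1..100, replacing multiples of 3 with "Fizz", of 5 with "Buzz", of both with "FizzBuzz".
    for n in range(1, 101):
        label = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
        print(label or n)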



> In Germany, what you call cheating isn't even considered cheating.

I can also attest that this isn't universally the case. Our faculty's introductory programming classes explicitly forbade this, and we went so far as to run automated anti-plagiarism heuristics over the solutions handed in.

I'm not sure how strongly I feel about the policy (or whether it's still current). There definitely was a tendency in such classes for students who came in with prior programming experience to be eager to »help out« less experienced students by sending them their solutions verbatim, which led to those students copying at least certain program constructs without grokking them and without having exercised the more general problem-solving skills associated with the task.


At my school, one of our professors forces students to do all their tests on paper. We also have a testing center where it's harder to cheat. There are always people in there watching you, and you will likely be seated somewhere random, nowhere near anyone studying in the same field as you.

I have to say the professor who forces testing on paper really gets the better students, in my opinion, because they're less likely to cheat (if they even try). They have more reason to try harder to pass.

We also have in our classrooms specialized remote-desktop software to see students' screens. I've seen someone in one of my classrooms (it was not a CS/programming course, but it was in the same lab) get caught that way, trying to cheat. It was sad; I never saw that kid again after that.


I used to be a tutor for my university's CS department. Cheating increased as our influx of CS students increased. Lots of kids who didn't actually enjoy CS, but had heard it was a lucrative career. It was incredibly frustrating to deal with them in class, and when they asked for "help" in the walk-in labs.

The biggest problem was that the department heads were unwilling to take on the issue. Most who get caught cheating just fail, but aren't reported for it, and the one lecturer who did take it seriously never got anywhere, because when he would report it, nothing ever happened. The more students in CS, the more funding the school gave the department...


Somewhat off topic, but I doubt very much there are any professors at state schools specializing in research that spend anywhere near 55 hours a week on their classes. Maybe he meant per month.


Because introductory CS enrollment has ballooned so much recently, many professors for large introductory CS classes nowadays are full-time lecturers that do not conduct any research. Many full-time research faculty only teach smaller, upper-level classes.

In my experience, this works out pretty well since many of the full-time lecturers do dedicate a lot of their time to teaching (55 hours is totally believable depending on teaching load) and do a really good job of it.


I can confirm that he does. He is a full-time lecturer for 3 or 4 classes a semester and produces a staggering amount of material. He is careful to never repeat assignments, and each one requires detailing the project, creating automated tests, and creating all sorts of frameworks for the students (e.g., import this jar and pass the instance of your game logic to this black box and you'll have an interactive Swing application to play the game). He plans out his lectures thoroughly and has 2-4 hours of them a day. This doesn't even cover office hours or meetings.

I never really realized how much work a good lecturer does until I was a TA under them. I have so much more respect for people teaching introductory classes now.


So make a very public example of one student :-)


A few anecdotes from college (15 years ago now) regarding cheating/"cheating":

I transferred from community college, where we used C++. The new school used Java, but the C++ classes still counted as credit at the new school, so I was in upper-division classes. I asked a teacher about it; they said "there's no way you can survive this class if you don't already know Java." I used one weekend to learn Java and saw it wasn't much different. However, throughout the semester that teacher would continually say "I know there are cheaters in this class because I know some of you don't even know Java." Uhuh.

My first actual cheating encounter was also that same semester: after I turned in my first project for a class, a classmate asked to see how I did something. Ignorant me sent them my project so they could. However, we had a final project built on the first. After the class ended they admitted they never finished the first, and had used my first project as a base for their final project. Lesson learned.

Someone offered me $200 to do a group project with them. Not really cheating, but their intentions were obvious. No thanks.

Of course, plenty of dead weight on other group projects. But nothing as overt as the above two incidents.


Anecdote time:

My friend is currently doing a Computer Science MS at a university that I'm sure most people on HN have heard of. She got accepted with an unrelated undergrad degree (though still STEM) and two Intro to Programming community college courses she took after graduating (which she cheated her way through by paying her roommate at the time to do all of her homework).

She hits me up for help with her homework/projects all the time. I'll help her a bit and then tell her how to look up everything else she needs to know on her own, but she always ends up falling back on another friend, who just completes the assignments for her.

Homework + group projects count for enough that she can bomb her tests and still pass. She's about a year into programming coursework now, and still struggles with just about anything outside of for loops and print statements. She is extremely good at math, though (tbh she's just really smart overall), so those courses and the theory itself aren't giving her any trouble.

I initially told her that spending 3 years getting the MS would be a waste of time for her, and she'd be better off learning on her own or maybe even doing a bootcamp, but now I realize that she doesn't actually care about becoming proficient or even competent when it comes to software development.

She's going to get her MS, and someone will undoubtedly hire her based on a CS MS that she obtained through cheating, and a resume that goes past embellishment and has outright lies. At worst she ends up at a defense contractor that bills $200/hr (and pays her 1/3 of that) for her to just sit around and read Reddit on her phone all day.

I really feel like a huge sucker sometimes for always having been completely honest, both in terms of school and work. I've never cheated and I've never lied or exaggerated about anything related to work, in regards to both job applications and employment. When I think about it, it's cost me a lot.


Well, it depends.

People like this are why companies ask programming questions that are slightly harder than fizzbuzz.

You might be able to get a middling junior dev position using her method, but good luck doing anything else with your life.

I certainly slacked off a bunch during school, as I only minored in CS. I was able to get a job, but I've had to work really hard after that to improve my skills, and go beyond the junior dev role (along with having to job hop a bit more than I am comfortable with).

And I also don't have any specialized knowledge, so I don't feel like I can do anything more than web dev.

The thing is, there is much more to a career in tech than just being a mid-level engineer. And it is difficult to go beyond that unless you can really bring useful skills and value to the table.


> it's cost me a lot

I've had jobs where I had to sit around and pretend to be busy. They're awful. I much, much prefer to have challenging work to do.

> My friend

I don't care to have friends like that. Cheaters will eventually try to scam their friends, too. Besides, cheaters are boring losers.


>I've had jobs where I had to sit around and pretend to be busy. They're awful. I much, much prefer to have challenging work to do.

Agreed with this. I haven't been more miserable in my career than when I was just moldering in a corporate body shop.


In this case, I believe, the prevalence of cheating among CS students isn't higher than cheating among students in other courses at the same universities, or elsewhere. Even in public health, the measured prevalence of a certain disease occasionally goes up courtesy of technological advancement: the prevalence of a disease that was initially low or unknown in a population suddenly shoots up because the means to detect it have become widely available, not necessarily because the incidence of that disease in that population has actually started going up. Now, if other courses offered at these named universities had comparable means of detecting cheating, the prevalence of cheating in those courses would probably be as high as in CS. Universities would thus need to either come up with systems for detecting cheaters that have both a high true-positive rate and a low false-negative rate, or come up with a better way of assessing students that does not involve exercises that can easily be "borrowed" from the Internet.


This is exactly it, in my opinion. It's much easier to catch cheating by running homework code through something like MOSS than by relying on a human TA to notice that two handwritten math assignments are exactly alike. I TAed both CS and math classes at a large state school and once caught ~20% of a 50-person math class on a homework assignment because they had all copied a mislabeled problem from the solutions manual. It's probable that they had all been cheating up to that point, but I couldn't tell. It turns out the math department, as a policy, doesn't try to do anything about cheating and just encourages lecturers to base the majority of the final grade on exams. I can't say this works much better, because on two separate occasions I've sat next to a student during an exam who obviously had their phone out and was reading from it. The rate of cheating is probably lower in exams, but lecturers should be more proactive about it.


At my uni you had to have all electronic devices in your bag under your desk and have all supporting materials displayed on your desk before the exam. There were floor walkers who would go up and down the aisles constantly throughout the exam looking for signs of cheating. If you were caught you were expelled.

A neighbouring uni even gave students lockers and didn't permit bringing anything into their exams.

Exam cheating seems like an incredibly easy problem to solve.


We have similar rules about keeping your desk clear and electronics in your bag. I think the difference is having people walking the floor, which isn't effective if you only have 1 professor watching 50 people, in my opinion. It's also very difficult to know exactly how effective those preventative measures are, either in discouraging students from cheating or in catching them when they do. It's apparently still easy enough to keep your phone in your pocket and pull it out onto your lap when you want to look something up. Honestly, I'd always assumed it would be impossible to cheat during those exams, but I have been proven wrong. In the most recent case, I and the person next to me were sitting in assigned seats in the front row, ~3 feet from the professor. He still wasn't caught, as far as I know. (I mentioned it to the professor near the end of the exam, but she didn't catch him in the act.) Even if she had, she'd have to be willing to go to the work of reporting him, attending the hearings, and doing the paperwork, which historically doesn't result in the student getting expelled if it's the first time they've been caught.

Lockers seem like a good way to prevent it, although phones are still easily hidden in a jacket pocket, so you'd have to ban bulky clothing too.


I didn't cheat once through a CS degree gained in the early 90's (as my sometimes shabby, sometimes OK results would show), but I was made to think hard about whether a certain incident was cheating or not.

One of my final-year subjects involved a group project; we had to develop a PDA-type collection of applications, so it fitted well with a group project. The 'calculator' guy never turned up to any programming sessions or meetings aside from the initial one; someone else ended up hacking together his app on the last day.

Back then we had to put together a paper project outline and notes, and there were strict requirements for its submission format: laid out like so, printed like so, indexed like this, bound like that. If this paper submission was a minute late, we'd be docked 50% (!) of our overall marks. With less than an hour left to submit, four of us were panicking, our project unprinted and unbound, and we had no idea how to operate the required machinery.

The fifth member, our 'calculator' guy, actually showed up, and then proceeded to, unassisted, blitz through the printing, collation and binding of the project. We eventually submitted a minute or two prior to cutoff, and got a good mark; without the 'calculator' guy, who did no programming whatsoever, we would have been in big trouble. I've always wondered if he went on to be the 'binder guy' or 'fixer' IRL after graduating.


This reminds me of my last year at university. For our final senior project, we each had to give a group presentation. I remember this one group that showed off a VB6 program (which was considered an outdated language even then). They didn't know how to run the program without opening Visual Studio and clicking Run. They didn't even have a basic understanding of the compile/run phases. They couldn't answer basic questions about the code. It was obvious to everyone that they had copied it. Yet they still passed the class.


I've worked with a tonne of people in the industry like this, particularly in MS land where something is "impossible" if Visual Studio can't do it. Often you'll end up with a million projects for namespacing because they don't know what a folder is.

It's only going to get worse as we have a generation of kids that have only used tablets and don't know what an "exe" is, or even a file.


I find that multiple projects are heavily used in Visual Studio because a lot of tooling works better when you split things into projects. NCrunch is an example here.

It's also easier to enforce dependency rules: a domain model should not have any dependencies on a specific technology, and the easiest way to enforce that is a separate project with no references.


With tuition what it is these days, kids aren't paying to be failed.


I TAed every semester of college except my first, and went to a small liberal arts school where cheating was legitimately uncommon. However, I'll never forget the following exchange:

Student: Why did you give me a 9/10 on the sorting lab?

tsm: Your shell sort didn't work for n > 7

S: But Professor Smith looked over my shoulder during the lab and said it looked fine.

tsm: And in fact it breaks for n > 7; I don't care if Alan Turing looked over your shoulder and said it was fine, if it breaks it breaks.

S: proceeds to pout at me for literally the rest of the semester, failing to understand how the professor-anointed code could have problems in it
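
(For contrast with the anecdote above, a working shell sort with a halving gap sequence fits in a dozen lines; this is just an illustrative sketch, not the student's code.)

    def shell_sort(a):
        gap = len(a) // 2
        while gap > 0:
            for i in range(gap, len(a)):
                item, j = a[i], i
                # Gapped insertion sort: shift larger elements right by `gap`.
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = item
            gap //= 2
        return a

    print(shell_sort([9, 3, 8, 1, 7, 2, 6, 5, 4, 0]))  # n > 7 and it still works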


9/10 was a very generous grade for that.

I was saddened when I TA'd an operating systems class and not only caught two groups cheating, but also found numerous groups with race conditions in their Nachos code (you could fuzz the seed to change the system's execution order between threads, and students clearly hadn't tested that). The professor wasn't harsh enough on either set of groups, in my opinion.


7/10 would have been more appropriate. ;)


I was an IA for a computer security class. We discovered a number of students cheating by copying answers from a public GitHub repo from the previous semester. We noticed because many students were submitting the same bizarrely wrong answer. Some students had even copied all the typos.

Even crazier, a number of the students had forked the GitHub repo with their own GitHub accounts. That is some terrible opsec. Definitely losing points for that.


I have noticed that these things matter only after someone has been burnt by them once already. So before your class, nobody had ever bothered to check for plagiarism.

I had to fail the projects of 1/3 of the class at a community college because of plagiarism. Although, to their credit, they at least put in some effort by shifting sentences around and re-drawing other students' diagrams. They did keep the same node names and relationships between nodes, though.


I "cheated" in one of my junior level CompSci classes. Forget what the name of it was, but we ended up doing a fair amount with low level debugging/analysis, assembly, etc.

One of the big multi-week projects had a series of increasingly difficult questions about different aspects of a compiled program that you were supposed to figure out. We were told we could use whatever tools we were comfortable using.

So I used a C decompiler and finished the entire thing in about 5 minutes.


That sounds like an inspired and competent solution to the problem. Given that the article focused on pre-made work shared amongst students, this hardly sounds like cheating at all.


Agreed. In reality, I cheated myself more than anything else. I ended up needing to teach myself some of those skills last year, 15+ years later.


I started out in the very early 80's, and my first job was a government job at an electrical company. The first week was an introduction to COBOL (which I already knew) and the second week was an introduction to GCOS (which I also already knew).

One of the challenges in the second week was to come up with as many ways as possible of copying a file using the GCOS8 (Honeywell Bull DPS8) shell. Now, on GCOS the current workfile was called *SRC, so one example was to use that. One of the other `students` came over and accused me of copying another chap's work. I laughed, and so did the chap in question, who knew me very well. We thought nothing of it. Later, when it was show and tell, this chap who had accused me of copying somebody else's work had a glaring error in his own work: he had used MTW instead of *SRC. Now, this chap was called Martin, and he had accused me of copying from Steven, whose initials happened to be... SRC.

Sadly, this Martin chap ended up getting the best position. It turns out he had schmoozed the head guy with BBC Micro talk, and as he had one, they had a good chat about that, and the whole aspect of him not being that good just got overlooked.

The moral being: if you're a good and honest person, you do get shafted from time to time.


I teach at a (somewhat academic) high school, and we've noticed a massive shift in how students think about copying in the past few years - about half of our students think it's OK to share answers for an assessment where it will assist another student (i.e. the classical notion of cheating). I'm very curious whether this is a mind-shift in younger people who are just a lot happier to share things, and this is just a consequence.

The other thing to consider with this is that the assessment system probably needs to be overhauled entirely - which would be a good thing as there are plenty of other reasons why it's broken.


Here is one big reason why "cheating" happens. It isn't directly related to the high school level, though, and is more applicable to college.

The problem is that different classes/subjects/majors have wildly different expectations with regards to how much you are allowed to collaborate with other students.

For example, in my business and economics classes I took, collaboration is encouraged by the professors so much that everyone just works together on everything, without even asking if it is cheating, because of course it isn't.

And people who take these classes then go on to take a class in a different subject, where you are not in a situation where literally every single assignment is a group project, and they forget to ask if what they are doing is allowed.


I taught computing for a short time, and some of what went on between instructors and students was quite astonishing. In fact, I'll go as far as saying that some instructors were complicit in the cheating, either by tacitly encouraging it or by deliberately making the material easier or shorter and lowering grading standards to keep "student approval", and grades, high. I didn't (I was pretty much the exact opposite), but for every one who didn't, there were probably at least 2-3 others who did (and perhaps not so surprisingly, those were the student favourites).

Then it's no wonder that, on the other side, we see things like https://blog.codinghorror.com/why-cant-programmers-program/


It's interesting how at odds the concept of plagiarism in a university classroom is with the industry-lauded values of open source software and the reuse of well-established implementations. As someone familiar with the university system but who learned software engineering entirely outside of that context, I can't help but think it seems a bit antithetical to a large component of the skillset I'd like to see in a new grad I was working beside.


This should be obvious to any person who tutors people online. If you check out sites like https://schoolsolver.com or https://chegg.com, they have become havens for programming questions. One could make the argument that it's partly the teachers' fault for reusing old programming assignments. On the other hand, especially on something like School Solver, there are plenty of programmers living in third-world countries who are willing to do original, complex programming assignments for a paltry amount of money.


They were havens for traditional engineering questions when I was in college too (Dynamics, Statics, Fluids, etc.). Every homework question we had seemed to be available online somewhere, and it led to kids simply copying answers instead of learning the material they were copying. It got so bad that the whole MechEng dept at my school got rid of homework in favor of problem sets that were solved in a given classroom, at a given time, with a TA supervising.

I can only imagine it is much worse now.


In my favorite class, which I think I learned the most in, homework was given with answers and partial solutions. It was still graded, but more for "did you do the work" and "this is what/where you went wrong".

It made homework not a test of how clever you were, or of how resourceful you were at "collaborating" with your peers. It made it about solving problems with instant feedback. Waiting two weeks for feedback on a problem is a lot less helpful.


I agree. That's how all of my upper-level classes were taught once everything got a bit more specific and trustworthy.


When I was studying engineering, problem sets like this were the norm for courses like statics, fluid dynamics, thermo, etc. They were considered optional and you usually collaborated on them in groups - the university even encouraged it by helping to advertise/organize study groups.

Mostly you'd sit in a group, everyone would work on the problems independently, and then: "What did you get for problem 3?" "100 kPa." "Yeah, that's what I got." No different from looking up the answer in the back of the textbook.

Our assignments in these classes (i.e. what counted towards your grade) were usually based on lab reports; e.g., I remember for thermo we had to play around with heat exchangers in various configurations and then compute their efficiency based on measurements we took.


That is what they changed the weekly problem sets to be... except not optional and graded before you left the session. It was a much better system, at least for me.


> It got so bad the whole MechEng dept at my school got rid of homework in favor of problem sets that were solved in a given classroom, at a given time, with a TA supervising.

Sounds like the Khan Academy format. Which is probably a better way for everyone to learn, to be honest. Why spend hours going down the wrong rabbit hole only to get the wrong answer anyway? Might as well have a TA around to help guide your approach.


I may or may not have cheated my way through classes.

If I were to cheat, I would use the following methods:

1. Break up (or merge) functions

2. Switch ordering of declaration of variables and the ordering of methods/functions

3. Change names of variables/functions

4. Create useless variables to hold state (or condense/remove useless variables).

5. Switch ordering of math operations

As long as you take the time to understand the code you are copying, you won't make an obvious mistake like that while(!true) fail in the article.

The general guideline is to use cheating as a shortcut to the answer, not as a zero effort copy paste.

If I had hypothetically cheated using the above method on ~ 15 major code projects, I wouldn't have gotten caught.


If you know enough coding to break up and merge functions, you can also write FizzBuzz, and therefore you are not the type of person this article and these comments are picking on.


At some point it would take only incrementally more effort to just write the stuff yourself. Most of the time in undergrad, you're just implementing the (provided in pseudocode) data structures and algorithms from the textbook anyway.


So... the easiest way to cheat on programming assignments and not get caught is to google an existing answer and know how refactoring works?

It occurs to me that I may have been "cheating" at the majority of my paid job for years.


At my university, most of the profs that teach our entry level programming courses use MOSS. One prof I had said that it's really good at detecting plagiarism and he automatically refers any infractions to SJA.

I always work alone, so I've never been caught, but looking at the grade sheets, it seemed like almost 25% of the students were referred on every assignment. I can understand the desire to catch cheaters, because as I've progressed into my more advanced classes there are always a few students who really shouldn't be there because they didn't learn anything in the prereq classes.


Most of these articles read: "They cheat because A, B, and C... it could be because of D..."

I would love to read an article which is like: "I cheat because X..." "I cheat because Y..." "We cheated because Z..."

In other words, an article or a study based on an anonymous survey. Straight from the horse's mouth.


I helped people cheat because it paid better than working fast food. I never took an exam, but I did a lot of assignments. I kinda felt bad for some of the students, who were taught Java for a couple of semesters and then thrown into C for an operating systems class. I also figured that they'd just fail the exam if they didn't learn it eventually (which happened frequently, usually resulting in more work/money for me the next semester).


I bought the solution manual for my algorithms and data structures course, because the professor, TAs, and textbook were so incredibly bad that I couldn't figure out how you were supposed to structure the proofs that you were expected to write for the problem sets - and they were incredibly anal about the way that everything needed to be formatted. I just gave up - I'd implement everything in C for fun and to know how it actually worked, and copy over the bullshit parts for the homeworks. I've never seen a class that could have been so much fun taught so badly. When you sit there and watch a tenured professor do nothing more than copy CLRS out on a blackboard, listing by listing...


At my French university, group and individual assignments had a very low coefficient (weight) compared to written exams (where it was almost impossible to cheat, as it was basically like whiteboard coding and students were spaced 2-3 seats apart).

It was on the order of 20% (group/coding) / 80% (written exam). This had the effect of eliminating 50% of all students in the first semester of the first year of the CompSci (DistSys) Master's degree. So even if a group cheated on group/coding assignments, they most likely wouldn't get through the written exam. We called it: "The Purge".

From then on, there were very few instances of cheating (topics were changed every year). One group got caught copying from another one during the second year (iOS Objective-C class) and they got a failing grade.

Almost every professor would use automated code checkers on group and individual assignments. And if you get caught, you have to go through thorough questioning by the professor, eventually getting a failing grade and having to go through the second session of written exams (which was most of the time harder than the first session).

What was unavoidable, though, were group projects with only one person working on the project and the others doing nothing or just spending their time preparing for written exams. This happened to me quite a few times. I didn't mind, as I was learning a lot, but it left me exhausted for the written exams, where I was getting good but not outstanding grades.


Throwaway account, obviously.

I'll add to the corpus of a few commenters talking about cheating:

We're in bioengineering at the BS/MS/PhD levels, so the issues with cheating are 'serious', in that our designs are somewhat likely to end up in a person someday (depending on FDA approval, thank god for that). The cheating in our department is staggering and totally ignored. I've brought these issues up to the department before, per the large red warning about it in the student handbook. Nothing has happened, at all.

There was a wiki of all the old homework that is re-used from year to year. The department did not care; the admins of the wiki were not expelled, nothing happened to them, and last I heard it was still functioning. The tests and finals are re-used every year, and you would then be very stupid not to cheat, as the classes are curved.

Some classes have a large coding component and there are coding 'projects'. I remember one final project of the year (the last of 3 total) was to model a honey dollop spreading on an incline, Runge-Kutta methods and all that. They never looked at the code, just its output graphs for the report. I know this because I put an infinite while loop in the code to hang the computer, as I was quite disgusted at the lack of effort in grading at the time, and it wasn't mentioned at all in the committee meetings later.

As some tests were open-computer, it was quite easy to see that everyone was cheating, because all the screens were on the same wiki, copying the answers there. Just disgusting, really. But the department benefits from letting all these people in and then taking their money, so nothing happens. I know I will never hire anyone out of here, as I know they are just garbage.


Programming requires constant learning.

If you need someone to teach you, it will be hard to keep up.

If you can teach yourself, you don't need college.

I never went to a university.

I have no degree.

I have the best career I can imagine.


I tried to learn to code on my own growing up and failed, eventually losing interest. It wasn't until I had proper instruction in my intro-level college course that I was able to learn the basics. I haven't had trouble keeping up since then. I have three degrees. I have the best career I can imagine.


I wonder if the most effective cheaters in school end up being more or less successful both as coders and professionally in general.

"Hacking" is in a sense related to cheating (finding easier solutions to problems...).


This happens in every profession. Some people are lazy, some are shirkers, some thrive on politics, some cheat, some steal credit.

As history shows us looking for shortcuts is a perennial human issue that affects a section of any population.

You are unlikely to be a doctor without going through an intensive educational curriculum and exams over a period of time. You will probably have an idea of what it entails, but laziness or other factors could motivate looking for shortcuts. It would be the same for software or any other field but the level of scrutiny and rigour would vary.

The education system, the interviews, and constant oversight by managers are designed to weed out these issues. No individual employee escapes managerial oversight in the real world, so this is an organizational problem that gets solved at that level, and it should ideally not be a concern or, worse, an excuse for a witch hunt by other employees.


What's with the obsession with student assessment over student learning? At the end of the day the student has to perform, and there are plenty of opportunities to build up a network of people that will assert you know your stuff. Why is it that grades are still a thing at this level?


A network of people who will assert you know your stuff, right. Friends will say whatever you want them to say. A guy I worked with never shipped anything, had no idea how to code at all, got blackout wasted several times during company events, explicitly propositioned the male CTO (they were both straight), and harassed every single woman in the company (seriously, like, 100+), all in just the amount of time it took to get the paperwork in order to fire him...

And he found a new job at a freaking bank as a senior engineer 2 weeks later. Because "a network of people asserted he knew his stuff".

(As to how he got hired in the first place... people "asserted he knew his stuff" and I assume he was pretty good at tricking people in interviews. The interview process sucked, mind you, but since people get fussy if you ask them to code in an interview, well...)


The student can go a long way without "performing". They are ultimately given a passing grade to show that they have learned things adequately. The cost that we bear as a result of lax certification is that no-one will trust your grades, and that you'll have to do endless whiteboard exercises at every employer to independently verify what might have been otherwise inferred from a single, rigorous qualification process.

It also creates a prestige problem for whatever university graduated some nitwit who can't code from a CS degree. Over time, it will be harder for other graduates from that university to find work.

I'm amazed by comments like this, frankly. It reminds me of how far computer science is from being a genuine profession. Would anyone say stuff like this about the contents of architecture or engineering courses?


> It also creates a prestige problem for whatever university graduated some nitwit who can't code from a CS degree. Over time, it will be harder for other graduates from that university to find work.

Coincidentally, this is being discussed in another thread today: https://news.ycombinator.com/item?id=14440780

Companies need to filter their applicants because some colleges don't do a good job filtering their graduates.


An undergraduate degree demonstrates to you what the characteristic activities of a field are like, and to observers that you can do those characteristic activities to some minimum standard.

Almost no one proceeds under the illusion that it's a comprehensive education. At best, you'll master a subfield in graduate school.

Continuous grading provides a valuable signal when you need to step up your game, transition to a different field, or drop out. I know several people whose poor CS grades sent them to tracks where they're now much happier and more successful.


Saves time.

It's a lot easier to determine whether someone is capable from "my GPA is x" than by having them sit down to write code for a few hours and then doing a code review.


Everyone does code review or whiteboards now in industry at least - it's not as if you're offered a residency or job after you get a degree unless it's a pretty good degree with some actual interviewing.


Interviewing someone who will not pass is more expensive than not doing so.


It's a sunk cost. You won't know if they pass or not until you interview them.


What part is the sunk cost? Interviewing applicants takes time out of a lot of peoples' busy days. They don't have to spend that time if the company doesn't bring the applicant in for an interview.


That's obvious enough. Any recruiting funnel will have a set of criteria that the prospect needs to fulfill.

Then they're brought in for a technical interview. There's no escaping that. You don't automatically give them a job because they listed a 4.0 GPA on their CV and were nice on the phone.


If you weed out a large percent due to low GPA, then you've saved time interviewing those people.


You would also weed out a lot of fantastic talent, who aren't motivated by school or didn't go to school.

In programming, this is a significant amount of people.


Ugh, plenty of people are not motivated by school but manage to pass it with decent grades. I'd personally stay away from "talent" who can only work on assignments they have a genuine, deep interest in.


What percentage of applicants do they offer interviews to? How do you think they whittle that number down?


I've never seen "GPA" as a requirement on an application before. Listing your GPA on your CV might help if its a good one, but I haven't seen data to support that.


Isn't most of a GPA not related to CS or STEM at all? I'm not sure learning how to write an essay on Thucydides, or what have you, will help.


I have never written code at an interview, in nearly 30 years.


At senior levels you will be judged by what you've made and by your reputation. If you're a new hire, you won't have much of either.


> What's with the obsession with student assessment over student learning

Surely you don't believe turning in someone else's work for an assignment counts as learning. At some level, we do need to know whether a student has learned the material. We assign homework because a) it's more efficient than lab assignments and b) adult students grasp concepts best by doing. We grade it because we believe more students will do the work that leads to learning, since the students themselves want high grades (well, employers and scholarships do).

I'd be happy to see someone try to teach an intro to CS course in a US flagship state school with homework optional, and grades purely based on tests. Would that meet your definition of student learning?


Education is both a signal to employers and an opportunity to build yourself as a person.

For some it's mostly the former, for others the latter. It depends on the majors chosen and a bunch of other factors.

The research in education makes it difficult to parse which of these an individual's decision comes from (some of it shows that signalling does play a big part, though).

My personal opinion is that, as more people get higher education, it becomes more of a signal -- the value of the signal of the degree is lessened, and by extension, the value of not having the degree is lessened even more. See the "Spence signalling model" for more on this.

So you get dumbed down classes in the long run.


For various reasons, people want to be able to evaluate students against each other, which requires a total ordering ( https://en.m.wikipedia.org/wiki/Total_order )

For example, I didn't want to go to grad school or work for a megacorp, so I didn't have to worry about my grades. But a lot of my classmates did want those things, which require large scale impersonal evaluation, so they had to sweat about their grades.


The problem I see is that people don't learn. My university classes were full of people who were copy-pasting Stack Overflow answers everywhere they could and were getting relatively good grades. The same people had no idea what open source was, and couldn't solve problems they encountered multiple times before without SO.


Grades are a thing because the people who pay for the school curriculum need a way of quantifying what they actually purchased.


We optimize for what we can easily measure, even when it doesn't make sense to optimize for that thing.


Unfortunately, unless that network of people includes people with hiring authority, it is not very helpful. Employers seem to put very little stock in references.


Having a personal connection is by far the best way to get a job in any field, including software. Employers care very much about references from people they know.


Student at UIC here:

Pretty much everyone in the lower level classes is cheating. Some of it's just "driving five-over" and some is practically murder. I've heard Chinese students talking throughout entire exams in big lecture halls, students leaving lab and immediately finding their friends waiting outside for the next lab section and dishing all the details of the day's lab, and students giving away a little too much information when talking about their strategy for a project.


I was a CS TA 20 years ago and it was the same. Literally everyone was doing it to some degree.

Back then you would hand in printed source code. People would occasionally photocopy someone else's, scratch the name off the title page and write in their own name like I wouldn't notice something that stupid.

I actually caught some phd students in some cheating that was almost as blatant.

The professors generally don't care either. The whole thing burned me out pretty quickly. I realized the school was just full of a bunch of people going through the motions, and CS degrees don't really indicate any particular expertise on their own.


What I don't understand is why the anti-plagiarism software they mention seems to compare solutions in isolation. Only 2 out of 450 made this particular mistake, but where did they sit during the exam? If they made the same mistake and were sitting within 5m of each other, that would be a much stronger accusation.

The bottom line: software should take into account not just how similar two solutions are, but how far from each other both students were.
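Roughly what I have in mind, as a toy Python sketch; the seat data, the similarity scorer and the cutoff values are all invented for illustration, not taken from any real tool:

    import itertools, math

    def flag_pairs(submissions, seats, similarity, sim_cut=0.8, max_dist=5.0):
        # submissions: {student: code}, seats: {student: (x, y)} in metres,
        # similarity: any 0..1 scorer, e.g. difflib.SequenceMatcher(None, a, b).ratio()
        flagged = []
        for a, b in itertools.combinations(submissions, 2):
            score = similarity(submissions[a], submissions[b])
            dist = math.dist(seats[a], seats[b])
            # the same rare mistake *and* nearby seats is much stronger
            # evidence than either signal on its own
            if score >= sim_cut and dist <= max_dist:
                flagged.append((a, b, score, dist))
        return sorted(flagged, key=lambda t: (-t[2], t[3]))

Even just sorting the existing similarity report by seat distance would make a borderline case like "2 out of 450" much easier to judge.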


Is cheating also a problem at coding bootcamps?


An additional anecdote from high school three years ago (take it with a grain of salt)

There was a person in my CS class where we built a virtual computer using Python. Almost weekly I saw her cheating, yet she devoted the time she saved to a side project that won her an award and some publicity.

Starting from then I've learned to focus less on the grade and more on actual projects that are challenging enough that I have to learn.


Anyone who relies on cheating never seems to do well during the harder major courses at my school (Texas). A lot of the professors have really rigorous cheating detection, and will make you explain every line of code in person if they suspect something (at least in honors classes). Also, I don't know what these people expect when they have to do a technical interview?


> Also, I don't know what these people expect when they have to do a technical interview?

They probably don't know they exist, they're fairly unique to our industry. They probably expect interviews to be just like everywhere else, all smiles and dressing well, with generic "when have you been challenged in life" questions.

A lot of them will get hired by big corporates that don't do code tests too, that's why even people with 10 years of experience have to do FizzBuzz.
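For anyone who hasn't seen it: FizzBuzz just asks you to print the numbers 1 to 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". In Python it's something like:

    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

That's the whole test, and a surprising number of applicants still can't produce it.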


Every form of engineering will have technical interviews.

I imagine a lot of them use their degree as a ticket to a non-technical job.


Technical interviews or tests? A lot of Devs can talk the talk and still not handle FizzBuzz. Engineers have official tests to get their licence.

As for those in college, it's entirely possible they don't realise the testing requirements either.


I had an incident like this when I was in University.

There was a guy in my class who struggled with a major project, and he asked if I could help him with it. He wanted me to show him how to get started, but he would never meet with me; he just wanted to see my code. So I stripped out the most critical part of my implementation and shared it with him. After weeks of silence he asked for help again, and I asked him to send me what he had... and it was the exact same code I had sent him weeks earlier, with very few changes. He had been asking everyone for their code and trying to copy and paste it together. I ended up going to our professor about it... and he knew who I was talking about before I finished explaining. It was not the guy's first time taking the class, and he had done the same thing before. He somehow graduated with a CS degree, even though it took him a few more years.


What I don't quite understand is why those people study CS or IT in the first place (in my country there's no distinction in naming). I get that some people are good at networking and office politics, and might even be good managers; let's give them the benefit of the doubt. But surely there are programs specifically aimed at educating IT managers? The IT industry is still hot, so surely there are corresponding management courses to go with it?

That way they wouldn't have to struggle with conceptually hard topics, and employers would get a better idea of a person's skills. Why study IT if you don't like wrestling with algorithms, installing software, configuration, etc.? Why pretend to be a techie?



> Though coding is a foreign language to most people, the principles of plagiarism are the same as with papers written in English.

English is a foreign language to most people too. You'd expect more from the NYT.


I went to one of the better University of California campuses like 20 years ago. And my friend majoring in CS told me a CS professor caught about a third of his class cheating in their take-home programming assignments. This was in a lower level CS class.

He ran an analysis over the turned-in assignments and discovered that almost a third had patterns signalling they had been copied from someone else's work. This was before GitHub, so the suspicion was that students were copying code from their classmates.
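I don't know what his analysis actually looked like, but tools like MOSS work on roughly this idea: normalise each submission, fingerprint overlapping token windows, and score pairs of students by how many fingerprints they share. A toy Python sketch of that idea (every name and threshold here is mine, not the professor's):

    import re

    def fingerprints(code, k=5):
        # crude normalisation: strip comments and lowercase, then tokenise,
        # so renaming variables or reformatting doesn't hide the copying
        cleaned = re.sub(r"//.*|#.*", "", code).lower()
        tokens = re.findall(r"\w+|[^\s\w]", cleaned)
        return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

    def overlap(code_a, code_b):
        fa, fb = fingerprints(code_a), fingerprints(code_b)
        return len(fa & fb) / max(1, min(len(fa), len(fb)))  # 0..1; higher means more suspicious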

We will always have cheaters.


> He also said that since students could use the regret clause, instructors felt more comfortable going to the honor council when students had passed up that chance.

This is very interesting: the clause that apparently gives the student a way out is seldom or never used, but it makes the teachers more confident and more comfortable about reporting their students. It's a measure that effectively makes things harder on the students, even though it appears to do the opposite.

Economics is truly magic.


My engineering courses were filled with cheaters.

Fortunately, these become apparent in the workplace and filter out in the first few years.

Any major with promise of wealth will have those who will resort to ill means to get ahead in the system. Again, fortunately, many are weeded out in later stages.


>Fortunately, these become apparent in the workplace and filter out in the first few years.

"Filter out" aka get promoted to management and earn more than they ever did as developers.


Which is why I think developers should be paid the same as managers. People actually building/creating/maintaining the product/service are at least as useful as people managers.


Try selling that to management


You are assuming the system is perfect.

> Again, fortunately, many are weeded out in later stages.

Why do you care?


> You are assuming the system is perfect.

I chose the word "many" to imply that the current process is imperfect.

> Why do you care?

I'm a recipient of freshly graduating (and interning) engineers who got past the hiring committee.

The project roadmap puts a burden on the other team members when one member is underperforming and/or not forthcoming about their abilities.


I care if engineers fake the learning required for their degrees because I don't want to cross bridges that collapse, take airplanes that crash, live next to leaky or explosive industrial facilities, or rely on faulty medical equipment. Engineering carries a lot of responsibility.


No one talks about the fact that current CS curricula are fundamentally flawed. Too much time is spent in class and on homework, and one is left with little time to do "real" work.


Define "real" work.

There's a lot on HN about the dichotomy between "boot camps are enough for a software engineer job" and "you need to know computer science fundamentals to be a software engineer," but both teeter on the No True Scotsman fallacy.

Granted, I do think that CS curricula should include actually implementing things in production.


>Define "real" work.

Things that generate profit. In 2017, that mostly means CRUD backends and mobile apps.

Before you get upset with my definition, don't take it up with me, take it up with capitalism.


That should be left up to the student to define.


What's "real" work?


Whatever you want it to be.


Care to expand? This isn't exactly a helpful answer


You should be provided enough time to explore things you are interested in. Better?


What's that mean? As in you have time to just learn on your own, without lectures or structured study? I never had trouble finding time to do that at university. How much is enough in your book? And you could always just take fewer credit hours per semester or something.


These days, at least half of the requests I see coming through my CodeMentor profile are blatantly obvious homework assignments.


Why do those students even need to cheat? Harvard has a 97% graduation rate and Stanford 93%.


I used to head-TA CS at Harvard (or "TF", to use their verbiage), so I can answer this: the graduation rate is very high, but honors are capped (mainly because the honors rate was at one time out of control). Personally I never really cared for honors, but for many people it can be extremely important.

Because honors are capped at a fixed percentage of the class, ranked by GPA, it's vital to maximize your grades, hence the cheating. No joke: I used to get many e-mails from students after grades were released begging us to bump them up.


Why not turn that to your advantage? Institute a policy wherein they are bumped down if they send an email and their reason for doing so does not fall within pre-stated grounds (e.g. a clerical error in recording a grade).

An excellent prof at the non-Ivy I went to did this, and it was quite effective. Some small percent of your grade (significant enough to bump you if needed) was guaranteed unless you chose to try and create noise to get moved up.


I think this creates a massive disincentive to ask why you didn't get the mark you wanted, for fear of being marked as a complainer.

At the end of the day you are there to provide guidance, to get their skills and knowledge to the place they want to be, not simply to separate the wheat from the chaff.


I gather those statistics only count "confirmed majors". At my school this meant that for your first 2 years or so you were "unconfirmed" until you passed certain threshold courses, namely Calculus 1 and the advanced programming courses.

This allows them to inflate their graduation rates by weeding out people in disproportionately difficult courses.


How do you think they got that?


I wonder what percentage of students ever invoke that regret clause.


cheating at harvard?? impossible!




"Troves of code online, on sites like GitHub, may have answers to the very assignment the student is wrestling with, posted by someone who previously took the course."

If the assignments are not changed from year to year, the institution is partially to blame.

A good student, researcher or scientist is expected to do research. When you stumble upon the solution to your problem (either by accident or intentionally), it becomes very hard to come up with a different one, and your solution is then very likely to be viewed as plagiarism. Of course, the student should cite the source of their inspiration, but since the instructor may reduce the grade upon learning that the student looked at someone else's solution, this is a devil's dilemma.


This is getting downvoted, but I once had an advanced-level course where the assignment was to write a peer review for a research paper. We were given 24 hours and allowed to use the internet, so I Googled the paper, found a scathing blog post my professor had written and then deleted, looked it up on archive.org, and found myself with a dilemma about whether to cite it or not. I decided it was better to cite it than risk getting caught, and it worked out fine for me. In hindsight I shouldn't have put myself in that position, but temptation is a cruel mistress.


EDIT: My comment completely missed that there were 100 lines of identical code noticed as well. Leaving below for context.

-------

Does anyone else find the example cited in the article rather flimsy? Out of 450, two students incorrectly used the inverse of a Boolean for a conditional – an error I have seen tens, if not hundreds of times.

Perhaps they used better ways of detecting cheating (some are covered later in the article) for uncovering the collusion in this specific case. If not, this “veteran computer science professor” is going on unnecessary witch hunts.


The two students also shared "nearly 100 identical lines of code". The `!done` was the icing on the cake.


Good point! I missed that and focused on the emphasized Boolean test, which is the first time I have seen any code in the NYTimes.


Trivia point: One of the co-authors of the piece, Jeremy Merrill, is a programmer-journalist at the Times (and previously at ProPublica); I imagine he helped with the code:

https://www.propublica.org/nerds/item/upton-a-web-scraping-f...

http://jeremybmerrill.com/blog/2016/01/flyover.html

http://jeremybmerrill.com/clips/2013/05/updating-dollars-for...


Oddly, it's not the `!done` that is the problem; it's the `boolean done = true`, which should be `false`.
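The usual sentinel-loop idiom starts the flag at false; a minimal illustration, in Python rather than the article's Java:

    done = False               # starting this at True would mean the loop body never runs
    while not done:            # the `!done` / `not done` test itself is the standard idiom
        line = input("> ")
        if line == "quit":
            done = True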


The article implies that the cited instance of text is verbatim identical, which is unusual.


Yeah but the students confessed, and it turns out the professor/administration was right. What do you have to say about that?


... The kicker to me is that, as a developer, if I am having trouble doing something, I ask the internet to see if someone else has already solved it. If I find the solution on Stack Exchange, I'm not going to rework it just so I can claim that I did the work; I'm probably not going to change a thing other than tweaking it to fit whatever the rest of my code looks like. (I will re-type anything I find on a site like Stack Exchange, because it helps you remember what you did and makes you think about how what you're typing works, but it's essentially copied and pasted from someone else's work.) I don't know if this counts as "cheating" in the classroom, and I certainly wouldn't have an entire application of reworked Stack Exchange answers, but it's a skill worth teaching to CS students. Heck, people sell books of code snippets intended to ease development, for Pete's sake; I can't see why using a Java cookbook should be penalized in the classroom.


As a JavaScript developer none of this matters to me. Here is the reality I see:

JavaScript skills are in demand almost everywhere even if it is not the primary language or skill of a given job. Almost nobody receives any kind of formal education or training in how to write in JavaScript. If you work for a web based company you WILL at some point be tasked to write in JavaScript.

When developers, who have never touched JavaScript before, are tasked to write JavaScript one of a couple things happen. First, they will attempt to write Java (C# or whatever) in this language because the syntax roughly looks the same. Secondly, they will grab the easiest framework they can find and hope to fill in the blanks.

Essentially, at the end of the day, almost nobody knows what they are doing. In this line of work most people are utterly afraid to write code. We would be in much better shape if our biggest problems were developers or students cheating.

Some food for thought: https://www.reddit.com/r/programming/comments/66hpqs/less_th...


What??

This seems like a road that only leads to technical debt brought on by people who, as you say, know nothing about the code they're working on. Why would you feel that's acceptable to a company?

Fortunately software development (often) has little direct impact on matters of life or death, but can you imagine an architect or construction worker who successfully cheated their way through school being brought on and having zero idea of how to weld, or how to predict the weak points in a structure?

Also, I'd argue that using frameworks is different than 'not writing code,' though I do agree that the JS community has become framework-obsessed...


> This seems like a road that only leads to technical debt brought on by people who

This technology is littered with technical debt from people doing everything they can to cheat. We are already there. That line in the sand was crossed years ago.


> Essentially, at the end of the day, almost nobody knows what they are doing. In this line of work most people are utterly afraid to write code. We would be in much better shape if our biggest problems were developers or students cheating.

It really is terrifying. What's more terrifying is somebody who thinks they do know what they are doing. Once in a great while they will be right, but far more often they are wrong, disastrously wrong.


I understand that it's plagiarism when you're supposed to be learning and writing your own code, but perhaps this is the wrong way to teach programming.

It shouldn't be such a big deal if you find an open source library or code snippet on StackOverflow that solves your homework assignment. Maybe the problem lies with the assignments. They could be rotated, rewritten each year, or even randomized for every student, so that you really have to understand the code you're writing (or copying). If you find some code that solves 90% of your problem, and you know how to tweak it to solve the remaining 10%, then that's a far more valuable and practical skill than writing everything from scratch. It's also a great way to learn from other programmers and see what they are doing.

I'm probably completely off-base, since I'm a self-taught developer, so my learning style is very different. But I spend the vast majority of my time reading other people's code, studying documentation, and using open source libraries. "Plagiarism" is a very, very important part of this industry.

It would be interesting to work on randomized coding assignment generators. I've worked on similar things in the past, where my company used it as a filter for software engineering applications. I wonder if any universities would be interested in that. It would completely solve the plagiarism problem, because every student would have to write unique code to solve the problem they were given.
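A toy sketch of the kind of generator I mean: seed the parameters from the student ID so every student gets a slightly different spec, and regrading stays deterministic. All of the details below are invented for illustration:

    import random

    def make_assignment(student_id):
        rng = random.Random(student_id)            # same student ID always yields the same problem
        op = rng.choice(["sum", "maximum", "count"])
        modulus = rng.randrange(3, 10)
        return (f"Read integers until EOF and print the {op} of those "
                f"divisible by {modulus}.")

    print(make_assignment("s1234567"))

Reference solutions and test cases can be generated from the same seed, so grading stays automatic even though every student's problem is different.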


What they call "cheating" is how I do my job as a developer of a 3-person (dev) software company. We make software. People like it and use it. We make money.

Who cares whether the code came from our heads, from something publicly available on the internet, or from someone willing to help us? As long as it isn't an actual violation of the law. There's no good reason to create a separate set of laws for the classroom than for the real world.

Can we just accept that this is how software is meant to be developed? Beyond the very intro level, the real assignments and challenges given to students should be about how to utilize, extend, integrate and optimize easily available existing code, not how to reinvent (regurgitate, rather) the wheel. This tests and improves understanding just as well as, if not better than, making them implement from scratch. Evaluate the student's understanding and the quality of their output, not how independently they wrote the code.


>As long as it wasn't an actual violation of the law

But it is a violation of Academic Integrity, which is sort of a law you agreed to when signing up for a class. At least at my school it was like that.


He is saying that we should get rid of the law of Academic Integrity, and make classes more like the real world.

It is an interesting idea, the problem being that in the real world, while you can kinda look up the answer, it is rare to find someone who is doing the exact same thing as what you need to do, for thousands of lines of code.

Maybe college would better simulate the real world if you gave students personalized, large problems that nobody has ever done before (instead of small, toy problems)?


I would have liked a piece of coursework where we had to work on a module, document it, then swap it with another student to do the second half.


With my IR hat on: in the real world, "bringing the company into disrepute" is gross misconduct and normally an immediate firing offence.


But the examples they give from intro courses are not where you want this type of behaviour.

A student handing in dead code either doesn't understand it or didn't care to go over it after copying. Either way, that's not someone you want "looking up answers" on Google as a professional.


If the structure of the class incorporates using real-world code, then fine. Most upper-division classes allow you to use any libraries you want anyway. But some classes, especially lower-division, are explicitly built around learning a concept from scratch. If you copy the code, you can't learn the concepts.

I agree that teachers should strive to allow "open internet" assignments and tests as much as possible, but it's not always viable. All the best classes I've had were this way, but it's usually quite labor-intensive for the instructors.


> Who cares whether the code came from our head or from something publicly available on the internet, or from someone willing to help us?

I hope you are not implying it's OK for students to blindly copy / use code they don't understand.

Otherwise you will end up with "professional" software developers who can't even solve FizzBuzz :p


It can also be a copyright violation, depending on where the code is copied from.


Blindly copying code doesn't really scale though. It might solve one specific problem, but you still have to organise the code and make it reliable - which you probably won't.



