
I always thought this was a stupid restriction. You can't view the post while authenticated but you can view the post while unauthenticated.

It provides friction for further misbehaving. Imagine you blocked someone who has serious issues with people who #foobar. It's better for you if they can't easily find you and repost your content to their community who also hate #foobar. It's not perfect, but the friction helps prevent drive-by bad behaviour.

No it doesn't. The people who are malicious about it will be using multiple accounts. The block button doesn't stop them. If anything, it provides them ammunition to go "See, this person is a sensitive one, let's add them to the list".

Either your posts are public or they're not. There's a pretty clear distinction between the two, and anyone who thinks otherwise is sorely mistaken. The risk of people re-posting your content is a natural consequence of your aspirations to be popular on social media, and we shouldn't be giving people a false sense of security.


This is the old "it's not perfect, therefore, it's useless" type of argument. No one claimed it's perfect, but that doesn't mean it's useless.

You don't want to interact with me? Fine. Then why should I still see your posts? Yes, some crazy people will go to lengths to see it anyway, but most don't and will take the hint, shrug, and go away.


Well, Twitter's implementation is more provocative than it needs to be. It leaves "This tweet is hidden because the person blocked you" tombstones everywhere, which is worse than just showing the tweet and gently disabling the reply button if you're blocked, or even hiding their reply threads entirely.

If you're blocked by prolific reply guys in your circle, you not only have to regularly scroll past their censored replies at the top of the reply section, you also see other people's replies to them, which compels you to switch accounts to see what dumb thing they said this time. And now you can simply reply to them from your other account.


Tons of stuff can be improved, sure. However, in general I think this is changing things in the wrong way – there should be more control over who you interact with, rather than less.

Twitter ossified their feature set a long time ago, which is not surprising because "stick with what made us big" is a reasonable course of action. In that sense, greater diversity and more experimentation in different approaches with Threads, BlueSky, and Mastodon is generally a good thing (even though I don't really use any of them, mostly out of laziness).


"It's not perfect, it's useless" sounds illogical, though I would first disagree with the characterization of saying that it's not perfect. You're putting words in the mouth of the opponent, straw manning, by having the opponent accept the characterization it's not perfect to be juxtaposed with the opinion that it's useless. I would say that it isn't only not perfect, it's useless.

Feel free to steel-man and tell me why you think it's useful. I think the friction it causes is cancelled out by annoying the mostly well-meaning portion of the people who are blocked, while not deterring the truly toxic users, who will quickly and easily bypass it anyway.

I don't tend to participate in twitter fights. A type of twitter fight that comes to mind is people who work at FAANGs being annoyed that people are criticizing their employer's agenda. I saw this against Google with AMP and with Chrome hiding the path from the address bar in a dev channel release. That isn't really coming out of a place of toxicity. The complainer doesn't really deserve to be blocked, but the FAANG employee has a right to keep their mentions and reply threads clean. For minor scuffles like these, a lighter form of blocking is nice.


I think it's the old double-edged sword instead.

Ironically, Elon's been blocking anyone who talks about his missile programs:

https://www.reddit.com/r/EnoughMuskSpam/comments/1fnvt8n/elo...




Reddit is basically the Internet's sewer.

Nah this is a classic case of the no true perfect double-edged slippery-sloped sword of damocles being the enemy of the good no true double-edged slippery-sloped sword of damocles that shouldn't be thrown in glass houses where the chickens have come to roost.

> It's not perfect, but the friction helps prevent drive-by bad behaviour.

> No it doesn't.

There's really no use in continuing this discussion when one party is unable/unwilling to use precise language to discuss marginal effects. Obviously I presume what you mean is that the marginal effect is too small to be relevant, but discussions with people who round that off to "No it doesn't" rarely go anywhere productive.


> The block button doesn't stop them.

No, and nobody claimed it does. Making it just a bit harder and making the other party jump through an extra hoop reduces it, though. That extra friction has been implemented on many platforms and it works. Instagram adds friction in a different way, but also claims it has a positive result: https://builtin.com/software-engineering-perspectives/key-be...

You can also see people migrating to other platforms citing the lack of search / being easily found as a feature. It's not black and white, and it's not just public or not public. There's a whole range of how accessible your content is to parties who would attack you.


Yes, because everyone knows that trolls just give up at the least amount of friction.

A large enough number of them do, which does not fix the problem completely but makes it more manageable through other strategies.

They can't comment after blocking; that's the main point.

Yes it does and I can tell that you have never been piled on by followers of large accounts. Blocks aren’t perfect but they make a huge difference.

Blocks are (were) an easy line of defense for most of the lazy trolling. People could get around it but few bothered.

It might have not been ideologically consistent but it was effective.


The way I think about it is this: there are infinite computer tricks you can use to do all sorts of things, but how many malicious people are also good enough with computers to know about them?

For example, if all they know is the app, they may even think you NEED the app to see posts, or that you CAN'T create multiple accounts on the same app because they signed up with their phone number instead of e-mail.

Just like the smallest UI hurdle blocks users from onboarding, the smallest UI hurdle also stops malicious behavior.

Not every malicious user is the hacker type. Sometimes it's just someone stalking their ex-partner. The malicious user could be an elder, could be a teenager, they could be from the U.S. but they could also be from Africa.

When you consider non-English speaking countries, expertise drops tremendously because any high level information is kept behind a language barrier.


> repost your content to their community who also hate #foobar

This is valid. I don't think it rises to the level of preventing them from seeing my public content. But perhaps a brake on their ability to repost it would be courteous.


How about this: if you're publicly saying things that offend a whole community, take personal responsibility for that and accept whatever offensive things they say about you in response. People have this dumb idea that everyone else should respect them for what they say in public while they retain the right to disrespect others. If you can't handle that, stop saying inflammatory things to the whole world.

"Communities" are easy to offend, especially with bad actors lying and stoking flames.

Communities have given death threats to college students who made bad plays in sports games. https://abcnews.go.com/Sports/ncaa-1-3-star-athletes-receive...

There's a long-established pattern linking bullying and suicide, and huge amounts of tougher-to-quantify lesser damage done. Giving people mechanisms to slow and reduce bullying makes perfect sense.


I'm not talking about bullying, where the person being offended is the only one who suffers from it, but rather things like saying "X is false" which offends believers of X and they retaliate with insults. Yes, people get angry when you offend them or even just disagree with them. If you don't want to cause that, don't offend them. Unfortunately a lot of people are so arrogant about their own beliefs, they feel they have the right to both offend people who disagree and be protected from being offended in response.

Not saying the people making death threats are innocent and of course the law should try to stop them, but often it's not powerful enough so people who don't want that unfortunately have to keep quiet or be anonymous when they want to step on the toes of death-threat-happy communities.


This just means that the only people who will post on your platform are toxic. Users don't want that. Advertisers don't want that. Instead, we can build tools to make life harder for the toxic users to discourage them and reduce their impact.

Again, it's not just bullying. Real world example:

* The quarterback (QB) for Miskatonic University throws an interception on the last play of the game.

* Internet members (trolls) find QB's Twitter handle. Trolls harass QB, insult QB, and make QB's life miserable. They publish QB's handle on their forums, where other trolls also harass QB.

* QB blocks the trolls

Which scenario is better for QB, non-trolling users, advertisers, and the world?

1. The trolls can still see and follow QB and respond to everything they do. Maybe QB can't see their messages, but the trolls are free to harass QB's followers, the staff of any location QB posts about being at, and so on. They continue harassing over and over. For years and years they can see QB's posts and continue engaging.

2. The trolls cannot see QB's posts, follower lists, or engage with them. A few particularly dedicated trolls may use alt accounts but it's a tiny percentage of the original trolling and much easier to manage. They eventually get distracted with some other player who made a newer mistake and leave QB alone.


1 is better because, in general, nobody actually knows who's an all-bad troll and who's a worthwhile activist. Least of all the person being criticized. QB himself isn't hurt after he blocks them and can carry on with his life as if it's not happening.

I've heard celebrities say they don't read what anyone says about them on social media. That sounds like a good idea because there's always going to be haters to any popular public figure. Just ignore them and you're fine.

3rd parties being affected? Well, stop associating with the widely hated public figure if you can't handle the heat that comes with celebrity.


Have you ever dealt with harassment? Do you think someone should take responsibility for posts that cause them to be harassed?

Yes and yes. It's called picking your battles. If you're not equipped to stand up for yourself, or to have anyone else do that for you, you're going to get hurt when you insult someone else's beliefs that they've linked to their identity or even their purpose in life.

Some people tie their beliefs to rejecting your identity. What are you going to do in that case? Stop being gay? Not put pictures of yourself online?

Stop publicizing it to the whole world, yes. The world is chock full of homophobic people and some of them are going to see it if you share it with all of them.

What if you're a holocaust denier? What are you going to do? Somehow not share your belief with the world? Yes! Either that or accept the hateful responses you're bound to get if you do share it.

Don't forget this is all about public posts. Not anyone's private life or stuff they only share with people they trust. There's always going to be a Muslim somewhere who wants to kill you for being a practicing gay, or a holocaust believer who wants to punish you for disagreeing with their it-did-happen belief.

What about an atheist who ties their belief to rejecting the identities of Christians? There's no end to what people vehemently disagree on.

By the way, you can always change your identity if you really want to. Just because you're gay doesn't mean that has to be your identity. You might primarily see yourself as a citizen of your country or what your job defines you as or your personality or religion or just simply yourself if you don't want to be part of a bigger group.


Holy false equivalency, Batman!

Society does not have to just let the worst people in it be as they are. Neither do platforms. Bullying and hateful abuse hurt the platform - users don't want to be subjected to it, advertisers don't want their name next to it. Blocking and other tools to reduce this vile garbage are positive things.


Who are these worst people and who made you the judge? Is it holocaust deniers? Gays? Muslims? Christians? Vegans? Humans are diverse and contradictory in their deeply held beliefs about right and wrong. What about climate change deniers? They get blocked without saying anything vile - just disagreement.



Not at all. I do think people should share their beliefs even when other people don't like them, and other people should be tolerant of that. But the reality is some of those other people will try to hurt you for expressing a belief they don't agree with.

It's the opposite. People with serious issues (i.e. stalkers, trolls, etc.) will continue following from a second account or a proxy. Meanwhile, regular accounts with legitimate criticisms (for example, pointing out misinformation, calling out bias, and so on) get blocked by bad actors, and those users will not find them anymore or repost them, because they don't invest time in it.

Blocking mainly prevents regular normal users from seeing tweets. Best example is Lex Fridman who's blocked a million people for no apparent reason. Say something he doesn't like: You're blocked. Never even interacted with him but commented on a topic he doesn't approve of: Blocked. You under-cook fish: Believe it or not, he blocks you.


Not to mention people attempting to slander others behind a block (so the person being slandered has no idea until the damage is already done), or temporarily unblocking to say something to someone, and then blocking again.

> Not to mention people attempting to slander others behind a block (so the person being slandered has no idea until the damage is already done)

I saw someone once post about their outrage over the practice of criticizing someone on Twitter without tagging the person you're criticizing in your tweet. ("Subtweeting.") Apparently the thinking goes that if anyone anywhere says something about you, you have the right to be notified.


I'd argue that it really depends on the kind of criticism. What I had in mind was more along the lines of accusing someone of wrongdoing rather than just criticizing them.

I think tagging someone when you're accusing them of wrongdoing is the fair thing to do, considering how quickly that sort of thing can whip others into a frenzy. I see a lot of this sort of thing, where someone will accuse someone of heinous things like pedophilia from behind a block, and by the time the person being accused understands what's going on, they're being hounded by people who get off to drama about why they haven't denied the accusations yet.

On the other hand, tagging when simply criticizing someone often feels like attention-seeking to me. I see this a lot with space stuff, where someone will offer (often completely ignorant) criticism, while pinging Musk, Bezos and/or other figures in the space discussion community.


https://reason.com/volokh/2019/11/20/academic-subtweeting/

"You can talk to me, but it's unethical to talk about me."


It's kind of the opposite of this quote from Christian Bale: If you have a problem with me, text me. And if you don't have my number, you don't know me well enough to have a problem with me.

If it's one tweet that's blocked, is there really much damage/slander? If people actually start talking about it then the affected person will get notified anyway from one of the non-blocked accounts responding.

Those are all issues as well, but brigading is absolutely a thing.

Blocking doesn't really fix the brigading problem, regardless of whether people need to log out or not.

Most X users will just post a screenshot of the tweet, breaking accessibility in the process and disassociating the original author from the thread against their will.

This isn't always a good thing, as it leads to people being surprised by crowds of strangers suddenly screaming at them and not being able to see the source of their anger.

Some people do this as a "preventative measure", so that their post still makes sense when the original tweet is deleted.


> Blocking doesn't really fix the brigading problem

Automated blocking absolutely solved most of the problem. In the late 2010s it was common for political accounts to use scripts that went through follower graphs for their worst repliers and blocked everyone, or went through the list of people liking a certain tweet and blocked all those accounts.

They were quite fond of that approach and were happy with the outcome pre-Elon. Even if someone in the "bad group" screenshotted a tweet from their target to make fun of it, the target didn't really get bothered by it because they walled themselves well enough. The screenshotter is incentivized to not interact with their target so they don't get blocked again, and no one excited by all the dunking cared enough to go harass the target anyway.
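For the curious, here's a rough sketch of what one of those block scripts might have looked like, using tweepy against the old v1.1 endpoints (tweepy 4.x names; the credentials and target account are placeholders, and those endpoints have since been locked down, so treat this as illustrative only):

    # Illustrative sketch only: block every follower of a chosen account,
    # the way the old block scripts did. Assumes tweepy 4.x and the legacy
    # v1.1 followers/ids and blocks/create endpoints.
    import tweepy

    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholders
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",  # placeholders
    )
    api = tweepy.API(auth, wait_on_rate_limit=True)

    WORST_REPLIER = "some_bad_account"  # placeholder screen name

    # Walk the follower graph of the offending account and block each follower.
    for follower_id in tweepy.Cursor(api.get_follower_ids,
                                     screen_name=WORST_REPLIER).items():
        try:
            api.create_block(user_id=follower_id)
        except tweepy.TweepyException:
            continue  # suspended/protected accounts etc.; skip and keep going

Scripts that went through the likers of a particular tweet worked the same way, just starting from a different list of user IDs.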

Now as someone who found themselves in a few blockchains during peak Bernie-mania, I like the proposed change. I've been blocked by several popular accounts because of who I followed, and I will enjoy being able to reliably read someone's content even if I'm not allowed to interact.


I enjoy your irregular use of the word “blockchain.”

> Most X users will just post a screenshot of the tweet, breaking accessibility in the process and disassociating the original author from the thread against their will.

But that means the blocking worked. Another person will now have to go to the extra effort of either finding that tweet or going directly to the profile to interact with them. And those extra steps were exactly the feature the blocking provided. It changes "click reply, type 'kill yourself you <slur> <slur>'" into "login into non-blocked account, retype part of the text from the screenshot, search, find the matching tweet, reply, type". And that's a lot of work for a quick response.

Sure, it won't stop everyone. It reduces the effects though.


You could design Twitter in a way where handles in quoted tweets aren't clickable if the quoter is blocked by the quotee, but the quotee can still be notified that they've been quoted by somebody they blocked, and optionally choose to see the post. Same for deletion, you could make quotes literally include the original post and preserve it forever, but notify viewers when the original is deleted.

> post a screenshot of the tweet, breaking accessibility in the process and disassociating the original author from the thread against their will.

I'm surprised that the platform does not do OCR on text images by default.

And it wouldn't be hard to check if a particular image looks like a tweet and, if it does, find out the exact match.
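As a rough sketch of the kind of check I mean (pytesseract for the OCR, difflib for the fuzzy match; the screenshot path and candidate texts are placeholders, and a real system would query a search index rather than a small list):

    # Sketch: OCR a suspected tweet screenshot and fuzzy-match it against known tweet texts.
    from difflib import SequenceMatcher

    import pytesseract
    from PIL import Image

    def best_tweet_match(image_path, candidate_texts):
        extracted = pytesseract.image_to_string(Image.open(image_path)).strip().lower()
        scored = [(text, SequenceMatcher(None, extracted, text.lower()).ratio())
                  for text in candidate_texts]
        return max(scored, key=lambda pair: pair[1])

    text, score = best_tweet_match("screenshot.png",  # placeholder inputs
                                   ["example tweet text one", "another example tweet"])
    if score > 0.8:  # arbitrary threshold
        print("Likely a screenshot of:", text)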


Just to be clear they still won't be able to interact with you after they're blocked. The only change is they can keep seeing your tweets directly in their account, as opposed to having to log out.

This I think is good. I don't really care about what is shown, because in my opinion twitter discussions are rarely worth anything, but I get the point of 'more visibility = good'. I just dislike brigading. And I don't follow anything political on Twitter; I used to follow cybersec and US sports, and even there you had brigading (and sometimes attacks got _really_ personal, like a bunch of people making dogwhistle comments about someone who criticized their favorite basketball player because he looks Ashkenazi. I think he's a TV personality now, btw).

To be fair, you can barely view any post while unauthenticated these days. Sometimes I click on a link to a tweet on my work laptop (where I'm not authenticated) and I get immediately assaulted by several pop-ups and cookie bars and redirected to the landing page when I try to dismiss them.

You can create a new account. It's a poorly designed feature that degrades the user experience semi-permanently and with no recourse.

Is "recourse" a good pattern for preventing abuse and harassment? Do any other platforms implement "recourse?"

Being blocked by like "pmarca" forever is kind of silly.

I have given up trying to view content unauthenticated, I used to be able to close the popups but now I get redirected to a login when I do.

> I have given up trying to view content unauthenticated

Same. I’ve accidentally made it 18 years without creating a Twitter account and there’s no content compelling enough for me to want to break my streak.


You can insert "cancel" between "x" and ".com"
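i.e. rewriting the host of the link to the xcancel.com mirror, something like this (the example URL is made up):

    # Tiny sketch: rewrite an x.com link to its xcancel.com mirror.
    from urllib.parse import urlsplit, urlunsplit

    def to_xcancel(url):
        parts = urlsplit(url)
        if parts.netloc.removeprefix("www.") == "x.com":
            parts = parts._replace(netloc="xcancel.com")
        return urlunsplit(parts)

    print(to_xcancel("https://x.com/someuser/status/1234567890"))  # made-up URL
    # -> https://xcancel.com/someuser/status/1234567890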

Reddit's implementation is even worse. It breaks all sorts of commenting/replying even down the comment chain, if anyone in the parent is blocked. And, the mobile app shows no usable errors. But, you can log out, or even make a second account, and see everything.

I block people on Twitter all the time, but I can't even remember the last time I had to block anyone on reddit (it was years ago). This speaks to the different models between the two -- on reddit I'm only interacting on specific subreddits, which because I've chosen them, have much nicer and more reasonable people than "all of Twitter". Twitter is always just nonstop fighting and yelling.

Reddit has a lot more people who block because they want to get in the last word and prevent you from replying.

It has become weaponized. If someone blocks you, they can see your posts but you can't see theirs, and there is a good chance they will block you if you get into any kind of long argument where they feel personally insulted that they are being disagreed with. Once you are blocked you cannot block them, so they will always see your posts but you can't see theirs, and there is no recourse for this; it just is, forever. So you end up proactively blocking people so that can't happen.

Someone needs to analyze this as game theory and write a paper on it. It is so poorly thought out and implemented that it would be funny if reddit didn't have a monopoly on long form written discussions on a lot of topics.


Weird, and TIL. I haven't blocked on reddit in years and haven't felt the need to, and also can't say I've noticed anyone who's blocked me.

BTW, you can block users that have blocked you: https://www.reddit.com/r/help/comments/sp0zdb/how_do_i_block...


You can block users who have blocked you -- if you know their username. But you don't know who has blocked you, and if they block you, you don't get to see their posts any more.

And you wouldn't notice if anyone blocked you -- that's the whole point -- but entire posts and threads could be right under your nose and hidden.

Regardless of whether it affects you or not -- it is ill conceived, does not accomplish its stated goal, and is easily abused by bad actors and thus should be heavily revised or removed.


> You can't view the post while authenticated but you can view the post while unauthenticated.

As per the article (emphasis mine):

> While a source at X told The Verge that the platform is making this change because people can already view posts from users who’ve blocked them when using another account or when logged out, several of us at The Verge (myself included) have noticed that X actually prevents you from viewing someone’s profile if you’re logged out.

For my part, I can see some accounts but not others, though the “rule” is not clear. Even then, on the accounts I can see they only show a dumb disjointed list of tweets ordered by popularity regardless of post date.


AFAIK, you can't view any tweets unless logged in these days. But the sort of user being frequently blocked likely has multiple accounts anyway.

> AFAIK, you can't view any tweets unless logged in these days

You can view a single tweet, but it won't display a thread for you unless you log in.


There does seem to be an exception for public outlets.

I clicked a link to the NYPD's Twitter and didn't have to AuthN. Makes sense too; every org that wanted their content to be fully available to anyone would leave if Twitter mandated login.

(Although they still use Facebook)


> every org who wanted their content to be fully available...

Every single public account I've visited still has the feed in non-chronological order which instantly makes it useless to me. I don't care about a post from my city government in 2019.


In theory you can't see any tweets at all, because they're now just "posts".

But everybody still calls them "tweets" because it was an amazing bit of branding.


I will always call it twitter & always call them tweets.

I thought so too, but it's also fun to see people posting that they've been blocked by the person they're debating... well, arguing with; whether the blocked person deserved it because they were being an ass, or whether the blocker was simply thin-skinned. I think the latter, seeing people rage quit because they can't rationalize their position, is actually a pretty useful signal.

The old blocking also stopped your posts from appearing in their "for you" timeline. The new blocking doesn't.

This is pure speculation. It's not implemented yet and there is no detailed explanation of how it will work. It could easily still prevent your posts from being algorithmically recommended.

The restriction isn't on them seeing your posts. It's on you seeing their posts. If you don't want them to see your posts, Twitter provides that functionality too. It just isn't called "blocking".

What does Twitter call it?

It's called protecting your account. A protected account can only be viewed by followers, and followers have to be approved by the account.

https://help.x.com/en/safety-and-security/public-and-protect...

https://help.x.com/en/safety-and-security/how-to-make-x-priv...

(Note that, because it's always possible to create new Twitter accounts, a whitelist is the only way to prevent someone from seeing your posts.)


But the point is you can’t see these posts from a particular account. It makes interaction between the two accounts a bit more difficult and so a bit less likely.

But interaction between accounts is still not possible?

The blocked person can obviously create a new account and bypass the block to some degree, but as others have mentioned, it will prevent them from reposting on their main account.

Unless they do a screenshot. Not a big deterrent.

Yes, the article says “engagements are still not allowed under blocks”. Then again, interaction in the general sense can still happen (you can always take a screenshot and post that).

It might look like this, but thinking about it, I guess it's still useful in reducing unwanted interactions. People are lazy, and thus adding even a little friction can help a lot in preventing them from stalking, spreading hate, etc.

I.e. of course it's possible to log in with another account, find the one who blocked you, take a screenshot or something, and then quote it or perform any other interaction from your main account. But it's obviously not very easy.

So I'm sure it worked as a way to reduce negative interactions on the platform. However, Musk doesn't want to reduce these; his goal is spreading chaos and forcing his narratives, so the decision totally makes sense for him.


> his goal is spreading chaos and forcing his narratives

It may be as simple as revealing blocked content is a short path to increasing outrage and thus engagement. Like, I could see Facebook doing this on Threads.


They should have had an "ignore" feature from the start, as well as block. They can post all the vile stuff they want; I just don't want to read it.

But IME, the kind of people I want to block are the exact kinds of people that would go through all that effort to keep trying to cause drama.


Isn't it a bit weird that a person can share a screenshot of a tweet by an account that has blocked them?

It shouldn't be that hard to check if an image looks like a tweet and, if it does, find out the exact match.




I am actually very sure that Elon is an extremely smart person. I can also see that he amplifies and posts disinformation that spreads hateful narratives. That's why I conclude that he intends to reach his goals in this way, which is quite evil and harmful to society. Even if I liked him, this would still hold true.

Reddit works the same way. Blocking is so stupid anyway.

A well behaved blocked person will honor the restriction but this only antagonizes the unhinged who usually have multiple accounts.

Something that aggravates people prone to bad behavior less is the right move.


One of the dumber features on the internet. It permanently degrades the user experience and with no recourse.

It only degrades the user experience if you're trying to interact with someone who blocked you. Which is the point.

Although old, this is the currently valid version.


the date tag on posts is not to indicate whether an article is deprecated vs current. presumably nearly every article posted is currently true (that's the "news" part). it's useful to hint whether it's interesting as a new thing, updated thing, or old but of current interest for some reason.

for the same reason that clickbait headlines draw you in, dates are good to wave you away.


In Greece and Cyprus, capers come in medium, big, and gargantuan jars. Really, you can find jars with 3.9 kg of capers!

Greeks/Cypriots usually have them either as a side or in their salad.


Very impressive.

The only thing that looks a bit bad is that the models it produces need to be repaired before they can be sliced.


Indeed. It looks to me like the export here is probably ThreeJS STLExporter? It is known for creating non-watertight models, unfortunately.

PrusaSlicer seems to do an adequate job tidying these up automatically.

I think it's not particularly uncommon for STLs exported from common CAD packages to have some of these issues, though this is a lot.
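If you want to sanity-check an exported STL before slicing, a quick sketch with the trimesh library looks something like this (the file names are placeholders; PrusaSlicer's built-in repair is usually enough on its own):

    # Sketch: check an exported STL for watertightness and attempt a basic repair.
    import trimesh

    mesh = trimesh.load("exported.stl")  # placeholder file name
    print("watertight before repair:", mesh.is_watertight)

    # Merge duplicate vertices, fix face winding/normals, fill small holes.
    mesh.merge_vertices()
    trimesh.repair.fix_normals(mesh)
    trimesh.repair.fill_holes(mesh)

    print("watertight after repair:", mesh.is_watertight)
    mesh.export("repaired.stl")  # placeholder file name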

Super-nice otherwise though -- a neat design. And in general I think client-side tools like this have a lot of potential. Three.js opens up potential for doing task-focussed things without using OpenSCAD on a server or a full emscripten build of OCC (like Cascade Studio does)


I only skimmed the article. But my stance is: anyone who has ever cherry-picked or reverted multiple commits is affected by clean git history. Unclean git history can turn a simple two-minute cherry-pick or revert into a 30-minute-plus job.


Looks like a deconstructed optocoupler while also using an optocoupler.

This is one of those rare instances where you can have hardware recursion.


Took me a bit to see what it was actually doing. In the doc folder it has some pictures. It appears to be sort of a linear actuator that pushes the spaghetti forward or backward, which makes/breaks the light path in the optical sensor on the other side.
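Purely as an illustration (not the project's actual code), reading that kind of make/break optical sensor from a Raspberry Pi could look roughly like this, assuming its output is wired to a GPIO pin:

    # Illustrative only: watch an optical beam-break sensor with gpiozero.
    # The GPIO pin, pull-up, and which edge means "broken" are assumptions.
    from gpiozero import DigitalInputDevice
    from signal import pause

    beam = DigitalInputDevice(17, pull_up=True)  # placeholder pin

    beam.when_deactivated = lambda: print("beam broken: spaghetti present")
    beam.when_activated = lambda: print("beam restored: spaghetti retracted")

    pause()  # wait for events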


Oh

I didn't read beyond the README and I thought it was using the spaghetti as a fiber optic cable, with an LED on one side and a photoresistor on the other.


Right now it's the date of birth. Later, it will be the applicant ID.


I have one of those wireless Magic Mice that takes 2x AA batteries. The optical sensor started behaving in a weird way back in 2018, I think, so I stopped using it. I wouldn't mind risking breaking it by trying to fix the sensor (cleaning the lens from the inside is probably enough), adding a rechargeable battery (and whatever circuitry that entails), and maybe even improving the ergonomics if it still works after that.


Is the title the actual article? The only extra info in the page is a semi-unrelated pic.


Looks like it's only available to members with an account, but accounts can only be obtained via "contact us" (and probably a very expensive subscription).

Here's an "unlocked" link (good until Nov 19th): https://www.nationaljournal.com/s/723252/the-fcc-voted-to-re...

Snapshot: https://web.archive.org/web/20231110203104/https://www.natio...

(I used the Wayback Machine because both archive.is and archive.ph are endless captcha loops.)


Here is a backup of the unlocked link: https://archive.is/hBf2Z


That's an endless cloudflare captcha loop, but ty


> That's an endless cloudflare captcha loop

Yeah. It's an ongoing issue (but not the old Cloudflare one).

There are workarounds but they have to be put in place.

ref https://news.ycombinator.com/item?id=38171778

ref https://news.ycombinator.com/item?id=38063548#38063580


It's a fake spite captcha.


Had this issue a few days ago; changing DNS to Quad9 seems to resolve it.


Sadly, I can't get past the "continue" button prompt.


There should be no need, you can just use uBlock to remove the entire prompt, since it doesn't scroll-lock the page.


I mostly agree with the points but I hard disagree with point number 3.

Clean code makes the project more easily maintainable. We generally try to keep a standard in code quality (and I would say 98% of the codebase we touch is well written). We also try to schedule refactoring rounds (but that doesn't always happen because of time constraints).


He didn't say clean code was bad, he said nobody cares about your clean code. I assume he meant outside the development department.


That isn't true either though. Sure, they don't care about it as a first order thing, but they care about development not slowly grinding to a halt over time. And writing well structured code is one of the things the software side of the house does in order to do a better job delivering on that desire from the business. If nobody on the technical side has the credibility or trust to make that case, then that's a problem.


>Sure, they don't care about it as a first order thing, but they care about development not slowly grinding to a halt over time.

Most only care when it affects them or sales, not when devs are asking to allot time to clean up code or pushing off a release to fix wonky stuff. In my mind that's not caring.

That's like people caring that they can't walk up stairs without huffing and puffing, but not enough to actually diet and exercise. That's not actually caring; that's regret.

I'm fortunate though, my company gives a lot of credence to dev.


That situation sounds to me like the problem of the development org not being trusted by the company's leadership when they say "this will slow down short-term initiatives but speed up long-term ones".


Yes, it's pretty common in my experience. Of course executive bonuses are granted based on short-term initiatives more so than long-term ones. When they finally reap what they sow, they just blame dev. You don't get to be an executive without knowing how to politic.


Oh yes, I agree it is common! But not at all universal.


In the article:

> Don't get me wrong, people will expect you to write good and clean code.

I can agree with this. Clean code is not "celebrated" because it is expected as normal. You won't get a raise for it. You could get into trouble for not writing clean code. But when the business gets into a tight spot, they will accept shitty code that fulfills their desired goal over a nice, clean, and elegant solution delivered a few days later. In that case, the shitty code could get you a raise.


