murph-almighty's comments | Hacker News

What state do you live in, and what is the density of your neighborhood (e.g. urban, suburban, rural)?


California, rural.


I've similarly wondered if I could get a pre-2024 Wikipedia, if just for the "fact based" flavor, from before LLM output started creeping in.


Do you think Wikipedia was polluted by AI slop starting in '24? That's certainly possible; I'm just not aware of it happening.

Wikipedia periodically publishes database dumps and the Internet Archive stores old versions: https://archive.org/search?query=subject%3A%22enwiki%22%20AN...

You could also grab the latest dump and just read the revisions as of 12/31/23.
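
If you wanted to script that, here's a rough sketch against the public MediaWiki revisions API (the helper name is mine, and you should double-check the parameters against the API docs):

    import requests

    def revision_as_of(title, timestamp="2023-12-31T23:59:59Z"):
        # Ask for the single newest revision at or before the timestamp.
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvlimit": 1,
            "rvstart": timestamp,  # start here and walk backwards in time
            "rvdir": "older",
            "rvprop": "ids|timestamp",
            "format": "json",
        }
        r = requests.get("https://en.wikipedia.org/w/api.php", params=params)
        page = next(iter(r.json()["query"]["pages"].values()))
        return page["revisions"][0]  # e.g. {'revid': ..., 'timestamp': ...}

    print(revision_as_of("Python (programming language)"))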


It was already slop; let's not pretend it's significantly different today.


What happened to Wikipedia in 2024?


It's beyond disgusting to me that the "News Bias meter" at the bottom of the article claims this is "unfairly" biased towards the left. Just because it doesn't reflect well on your side doesn't mean it's biased.


I think that's just a poor UI choice. That seems to be its default position until you vote. Once you've voted for how biased you think the article is, it shows you the "Most Popular Rating", which is currently "Center/Fair".


My pet conspiracy theory is that leetcode is used to exploit imposter syndrome in candidates. After an hour-long session where the candidate came up with a correct but less-than-optimal solution, I think it's easy for the interviewers to seem "disappointed" and lowball them with a worse position/salary.

I think this partially explains the phenomenon of external candidates entering Google at the college-hire tier when they may have been repeatedly promoted at their prior employer. Of course, the comp is usually higher, so it's not a big deal...


My heuristic: If you have 5 or more arguments, you should use a config object.
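
For illustration, a minimal Python sketch of the idea (the function and parameter names are made up):

    from dataclasses import dataclass

    @dataclass
    class RetryConfig:
        # One self-documenting bundle instead of five loose arguments.
        max_attempts: int = 3
        base_delay_s: float = 0.5
        backoff: float = 2.0
        jitter: bool = True
        timeout_s: float = 30.0

    def fetch(url, config=None):
        config = config or RetryConfig()
        ...  # retry loop elided

    fetch("https://example.com", RetryConfig(max_attempts=5, jitter=False))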


My heuristic is to not be dogmatic like that. A constructor for a triangle class may reasonably want to take 6 arguments (3 x- and 3 y-coordinates). In other cases, even four arguments might be better as a config object if those 4 are all very rarely used.
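
Sketching that counter-example (same caveat: purely illustrative):

    class Triangle:
        # Six positional coordinates read fine here; a config object
        # would add ceremony without adding clarity.
        def __init__(self, x1, y1, x2, y2, x3, y3):
            self.vertices = [(x1, y1), (x2, y2), (x3, y3)]

    t = Triangle(0, 0, 4, 0, 0, 3)  # a 3-4-5 right triangle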


I always got the impression that downdetector worked by logging the number of times they get a hit for a particular service and using that as a heuristic to determine if something is down. If so, that's brilliant.
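
If that guess is right, the core of it could be as simple as comparing the current hit/report rate against a recent baseline. A toy sketch, with made-up numbers and thresholds:

    from collections import deque

    class OutageDetector:
        def __init__(self, window=12, threshold=3.0):
            self.history = deque(maxlen=window)  # hit counts per interval
            self.threshold = threshold           # "down" if rate spikes this much

        def observe(self, hits_this_interval):
            # Compare against the average of recent intervals.
            baseline = sum(self.history) / len(self.history) if self.history else 0
            self.history.append(hits_this_interval)
            if baseline and hits_this_interval / baseline >= self.threshold:
                return "likely down"
            return "ok"

    d = OutageDetector()
    for hits in [40, 38, 45, 41, 39, 400]:  # sudden ~10x spike in visits
        status = d.observe(hits)
    print(status)  # -> "likely down"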


It's brilliant until the information is bad.

When Facebook's properties all went down in October, people were saying that AT&T and other cell phone carriers were also down, because they couldn't connect to FB/Insta/etc. There were even some media reports that cited Downdetector, seemingly without understanding that it's basically crowdsourced and sometimes the crowd is wrong.


I think it's a bit simpler for AWS: there's a big red "I have a problem with AWS" button on that page. You click it, tell it what your problem is, and it logs a report. Unless that's what you were driving at and I missed it; it's early. Too early for AWS to be down :(

Some 3600 people have hit that button in the last ~15 minutes.


The one true hybrid work model for tech (in my opinion anyway) is to just have everyone meet in person on some cadence for sprint planning/PI planning/whatever your cycle is. Everybody syncs up every so often, and then you leave everyone the fuck alone while they go work. Zoom can handle one-off meetings for pairing or other quick questions, but planning out work and carving out architecture solutions are better done face to face.


The odds are overwhelming that you do not, and never will, work on the kind of problems where such a minute advantage (if it even exists, which I doubt) makes any kind of difference to the bottom line. Most business-related coding is at the end of the day exceedingly trivial. Requiring any sort of on-site time is a thought that belongs in the past.


I agree most business-related coding is trivial, but all of the things a software engineer does surrounding the coding are not. All of the best software engineers I know recognize this, and all of the less effective ones depend on them to fill in these gaps.

I’m sure this varies by company.


> Most business-related coding is at the end of the day exceedingly trivial.

It is technically trivial. Building business software is fundamentally a communication/knowledge problem with technical aspects.


Definitely this. In fact, I suspect some larger organizations that get this right might become more competitive with smaller firms than they were before the pandemic, simply because meeting-happy people will have their opportunities to steal focus curtailed, and actual work time will be more clearly boxed out.


I've been more and more of the opinion that hobbies, and not so much work, are what drive our sense of meaning, but occasionally that's hard to remember when you're dealing with the shittier aspects of work (for me recently, debugging infrastructure issues with limited visibility and just no progress at all).

We (mostly) make enough money to do pretty well; might as well use it to enable ourselves to enjoy _something_ in life. For a while it was improv for me, now it's more powerlifting, but literally anything you can enjoy that's not work.


It's common in fintech for data/ML models to go through a similar review. If you happen to disenfranchise a set of people because your model said not to lend to them, you risk legal jeopardy.

To clarify, I think it's good that this is a practice.


The whole point of the model is to find who not to lend to. You are always going to exclude people by definition.


There are so many ways you can accidentally systematize racism in software like automated lending.

In the past there were explicitly racist policies like redlining. This leaves a historical data set of loan denials to people in a specific racial group. If that group has other traits that correlate with race, e.g. the neighborhood they live in, then you could presumably have a model that doesn't explicitly have race as a feature but uses that historical data and some subset of racially correlated features, and as a result disproportionately excludes people of that race.
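
A synthetic sketch of that proxy effect (all data and numbers here are made up; scikit-learn is used just for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, n)  # protected attribute, never a feature
    # Neighborhood correlates strongly with group (a legacy of redlining).
    zip_bucket = np.where(rng.random(n) < 0.9, group, 1 - group)
    income = rng.normal(60, 15, n)  # the only legitimate signal

    # Historical labels carry explicit past bias against group 1.
    denied = ((income < 55) | ((group == 1) & (rng.random(n) < 0.3))).astype(int)

    # Train WITHOUT race: features are income and neighborhood only.
    X = np.column_stack([income, zip_bucket])
    model = LogisticRegression().fit(X, denied)

    pred = model.predict(X)
    for g in (0, 1):
        print(f"group {g}: predicted denial rate {pred[group == g].mean():.2f}")
    # The neighborhood feature soaks up the historical bias, so group 1
    # is still denied disproportionately even though race was never used.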


I am not sure how one would remove all ageism, sexism, racism, classism, title-ism, and so on from lending. The whole concept is about making a prediction about the future with suboptimal information: guessing who will default on a loan and who won't. Same goes for insurance.

I have been pretty tempted to lie about where I live in order to cut my insurance costs in half. It seems disproportionately harsh that I get lumped together with the people who simply happen to live around me.

Is it possible to make predictions illegal if they are based on historical data from anyone other than the individual customer?


I should clarify: the point is to not discriminate against a protected class.


Tell that to the legislators and prosecutors who create and enforce laws against you.


Yes, but we should exclude people for valid reasons, not for their race.


A review doesn't necessarily mean you need to resolve all diversity/inclusion issues. It can merely require that you identify the issues and understand the risks of not resolving them.


I think what OP is saying is that documenting the amount of carbon released into the atmosphere does nothing to actually remove it, and hence is of little value to anyone except whoever invested in the company and makes money licensing the software.


Exactly: while it does allow you to measure where the likely sources of scope 1, scope 2, and scope 3 emissions are, it doesn't do anything to actually solve the problem (which is a hardware problem).

That, and there are way too many people now building carbon accounting software for the benefits that will accrue to it (similar to greenwashing). If only we could get some of those people to work in areas with real-world impact instead of LP returns :)

