
This comment really nicely captures how I feel about this. There's something to be said about good faith and knowing what the spirit of the agreement is.

There are some comments here saying things like "these compliance forms are ridiculous and are often just bureaucratic nonsense," and others advocating for playing dumb and answering in bad faith. And there you go.

I see a bit of an "everyone is doing it" attitude used to justify doing it too, because you're at a competitive disadvantage if you don't. That's not entirely wrong, but it sucks, and I personally will avoid competing that way. That probably means not much of a career in sales for me. Or science, but that's another topic...


Yeah I came away feeling like this was clickbait. Based on the title I expected to read something about the app stores quietly injecting telemetry in your extension or something like that. Something outside of the developer's control or being done quietly by default as part of the standard packaging and delivery pipeline.

What the author described was very much not that. What they described was developers making a conscious decision to add untrusted code to their extension without properly verifying it or following security best practices.

A more accurate title would be something like "It's hard to trust browser extensions, developers are bombarded with offers of easy money and may negligently add malware/adware"


Ahh as someone who has built several scraping applications, I feel their pain. It's a constant battle to keep your scraper working.


Bloody shame, OpsWorks was a great service in my experience. I built a few clusters with it before Kubernetes and terraform were a thing.

That said, I heard from folks at AWS that it was not well maintained and a bit of a mess behind the scenes. I can't say I'm surprised it's being shut down given where the technology landscape has shifted since the service was originally offered.

RIP OpsWorks.


OpsWorks was based on a really old fork of the Chef code. I did quite a bit of Chef in my day, but it really only made sense in a physical hardware/VMware virtual instance kind of environment, where you had these "pets" that you needed to keep configured the right way.

Once you got up to the levels of AWS CAFO-style "cattle" instances, it stopped making so much sense. With autoscaling, you need your configuration to be baked into the AMI before it boots; otherwise you're in a world of hurt, trying to autoscale to keep up with the load while the first thirty minutes of each instance's lifetime are spent doing all the configuration after the autoscale event.
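
A minimal sketch of that "bake first, boot fast" idea with boto3 (the instance ID, names, and instance type here are hypothetical, and this is not how OpsWorks itself worked):

    import boto3

    ec2 = boto3.client("ec2")

    # 1. Snapshot an instance that Chef (or anything else) has already
    #    configured into an AMI. All the slow work happens here, once.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # hypothetical, already-configured instance
        Name="webapp-baked-20230815",
    )

    # 2. Point the autoscaling launch template at the baked AMI, so new
    #    instances boot ready to serve instead of spending half an hour
    #    configuring themselves after the scale-out event.
    ec2.create_launch_template(
        LaunchTemplateName="webapp-baked",  # hypothetical name
        LaunchTemplateData={
            "ImageId": image["ImageId"],
            "InstanceType": "m5.large",
        },
    )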

A wise Chef once told me that "auto scaling before configuration equals a sad panda", or something to that effect.

Chef did try to come up with a software solution that would work better in an AWS Lambda/Kubernetes style environment, and I was involved with that community for a while, but I don't know what ever became of that. I probably haven't logged into those Slack channels since 2017.

IMO, there are much better tools for managing your systems on AWS. CDK FTW!


> Back when I was a junior developer, there was a smoke test in our pipeline that never passed. I recall asking, “Why is this test failing?” The Senior Developer I was pairing with answered, “Ohhh, that one, yeah it hardly ever passes.” From that moment on, every time I saw a CI failure, I wondered: “Is this a flaky test, or a genuine failure?”

This is a really key insight. It erodes trust in the entire test suite and will lead to false negatives. If I couldn't get the time budget to fix the test, I'd delete it. I think a flaky test is worse than nothing.


"Normalisation of Deviance" is a concept that will change the way you look at the world once you learn to recognise it. It's made famous by Richard Feynman's report about the Challenger disaster, where he said that NASA management had started accepting recurring mission-critical failures as normal issues and ignored them.

My favourite one is: Pick a server or a piece of enterprise software and go take a look at its logs. If it's doing anything interesting at all, it'll be full of errors. There's a decent chance that those errors are being ignored by everyone responsible for the system, because they're "the usual errors".

I've seen this go as far as cluster nodes crashing multiple times per day and rebooting over and over, causing mass fail-over events of services. That was written up as "the system is usually this slow", in the sense of "there is nothing we can do about it."

It's not slow! It's broken!


Oof, yes. I used to be an SRE at Google, with oncall responsibility for dozens of servers maintained by a dozen or so dev teams.

Trying to track down issues with requests that crossed or interacted with 10-15 services, when _all_ those services had logs full of 'normal' errors (that the devs had learned to ignore) was...pretty brutal. I don't know how many hours I wasted chasing red herrings while debugging ongoing prod issues.


We're using AWS X-Ray for this purpose, i.e. a service always passes on and logs the X-Ray trace ID generated at first entry into the system. Pretty helpful for this. And yes, there should be consistent log handling/monitoring. Depending on the service, we distinguish between the error log level (= expected user errors) and the critical error level (makes our monitoring go red).
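
A rough sketch of what that looks like in Python, assuming the X-Amzn-Trace-Id header that X-Ray propagates (the logger name and handler wiring are made up):

    import logging

    logging.basicConfig(format="%(levelname)s trace=%(trace_id)s %(message)s")
    log = logging.getLogger("orders")

    class TraceIdFilter(logging.Filter):
        """Stamp every record from this service with the incoming trace ID."""
        def __init__(self, trace_id):
            super().__init__()
            self.trace_id = trace_id

        def filter(self, record):
            record.trace_id = self.trace_id
            return True

    def handle_request(headers):
        # Reuse the trace ID generated at first entry into the system, so the
        # log lines of every service in the call chain can be joined on it.
        trace_filter = TraceIdFilter(headers.get("X-Amzn-Trace-Id", "no-trace"))
        log.addFilter(trace_filter)
        try:
            log.error("payment rejected: card expired")    # expected user error
            log.critical("payment provider unreachable")   # makes the monitor go red
        finally:
            log.removeFilter(trace_filter)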


It often isn't as simple as using a correlation identifier and looking at logs across the service infrastructure. A misconfiguration or hardware issue may well be intermittent and only visible as an error in a log from before or after the request, while the response itself carries incorrect data inside a properly formatted envelope.


I guess that's one of the advantages of serverless - by definition there can be no unrelated error in state beyond the request (because there is none), except in the infrastructure definition itself. And a misconfig there will always show up as an error when the particular resource is called - at least I haven't seen anything else yet.


That's assuming your "serverless" runtime is actually the problem.


You don't even have to go as far from your desk as a remote server, or open a log file, to see this happening.

The whole concept of addressing issues on your computer by rebooting it is 'normalization of deviance', and yet IT support people will rant and rave about how it's the users' fault for not rebooting whenever they get complaints of performance problems or instability from users with high uptimes. As if it weren't the IT department itself that loaded the user's computer to the gills with software that's full of memory leaks, litters the disk with files, etc.


I agree with what you're saying, but this is a bad example:

> Pick a server or a piece of enterprise software and go take a look at its logs. If it's doing anything interesting at all, it'll be full of errors.

It's true, but IME those "errors" are mostly worth ignoring. Developers, in general, are really bad at logging, and so most logs are full of useless noise. Doubly so for most "enterprise software".

The trouble is context. Eg: "malformed email address" is indeed an error that prevents the email process from sending a message, so it's common that someone will put in a log.Error() call for that. In many cases though, that's just a user problem. The system operator isn't going to and in fact can't address it. "Email server unreachable" on the other hand is definitely an error the operator should care about.

I still haven't actually done it yet, but someday I want to rename that call to log.PageEntireDevTeamAt3AM() and see what happens to log quality..
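
As a rough sketch of that split with stdlib logging (the mail host and message are made up, and real code would route the two levels to different alerting):

    import logging
    import smtplib

    log = logging.getLogger("mailer")

    def send_welcome_email(address):
        if "@" not in address:
            # A user problem: the operator can't fix it, so don't page anyone.
            log.warning("skipping welcome email, malformed address: %r", address)
            return
        try:
            with smtplib.SMTP("mail.example.com", timeout=5) as smtp:  # hypothetical host
                smtp.sendmail("noreply@example.com", [address], "Subject: Welcome\n\nHi!")
        except OSError:
            # An operator problem: the mail infrastructure itself is broken.
            log.critical("email server unreachable, welcome emails are not going out")
            raise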


> The trouble is context. Eg: "malformed email address" is indeed an error that prevents the email process from sending a message

I’m sure you didn’t quite mean it as literally as I’m going to take it, and I’m sorry for that. Any process that gets as far as attempting to send an email to something that isn’t a valid e-mail address is, however, an issue that should not be ignored, in my opinion.

If your e-mail sending process can’t expect valid input, then it should validate its input and not cause an error. Of course this is caused by saving invalid e-mail addresses as e-mail addresses in the first place, which in itself shows that you’re in trouble, because it means you have to validate everything everywhere because you can’t trust anything. And so on. I’m obviously not disagreeing with your premise. It’s easy to imagine why it would happen and also why it would in fact end up in the “error.log”, but it’s really not an ignorable issue. Or it can be, and it likely is in a lot of places, but that’s exactly the GP’s point, isn’t it? That a culture which allows that will eventually cause the spaceship to crash.

I think we as a society are far too cool with IT errors in general. I recently went to an appointment where they had some digital parking system where you’d enter your license plate. Only the system was down and the receptionist was like “don’t worry, when the system is down they can’t hand out tickets”. Which is all well and good unless you’re damaged by working in digitalisation and can’t help but do the mental math on just how much money that is costing the parking service. It’s not just the system that’s down, it’s also the entire fleet of parking patrol people who have to sit around and wait for it to get to work. It’s the support phones being hammered and so on. And we just collectively shrug it off because that’s just how IT works “teehee”. I realise this example is probably not the best, considering it’s parking services, but it’s like that everywhere isn’t it?


Attempting to send an email is one of the better ways to see if it's actually valid ;)

Last time I tried to order pizza online for pickup, the website required my email address (I guess cash isn't enough payment and they need an ad destination), but I physically couldn't give them my money because the site had one of those broken email regexes.


I disagree about extensively validating email addresses. This is why: https://davidcel.is/articles/stop-validating-email-addresses...


The article you link ends by agreeing with what I said, so I’m not exactly sure what to take it as. If your service fails because it’s trying to create and send an email to an invalid address, then you have an issue. That’s not to say you need excessive validation, but in most email libraries I’ve ever used or built, you’re going to get runtime errors if you can’t provide something that looks like x@x.x, which is what you want to avoid.

I guess it’s because I’m using the wrong words? English isn’t my first language, but what I mean isn’t that the email actually needs to work, just that it needs to be something in an email format.
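
Something this lenient is usually all that's meant, as a sketch (the real check is still sending the confirmation email, as the article says):

    def looks_like_an_email(address):
        """Deliberately lenient: only reject what no mail system could deliver."""
        local, sep, domain = address.partition("@")
        return bool(sep) and bool(local.strip()) and "." in domain

    assert looks_like_an_email("x@x.x")
    assert not looks_like_an_email("not-an-address")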


> Developers, in general, are really bad at logging, and so most logs are full of useless noise.

Well, most logging systems do have different log priority levels.

https://manpages.debian.org/bookworm/manpages-dev/syslog.3.e...

LOG_CRIT and LOG_ALERT are two separate levels of "this is a real problem that needs to be addressed immediately", over just the LOG_ERR "I wasn't expecting that" or LOG_WARNING "Huh, that looks sus".

Most log viewers can filter by severity, but also, the logging systems can be set to only actually output logs of a certain severity. e.g. with setlogmask(3)

https://manpages.debian.org/bookworm/manpages-dev/setlogmask...

If you can get devs to log with the right severities, ideally based on some kind of "what action needs to be taken in response to this log message" metric, logs can be a lot more useful. (Most log messages should probably be tagged as LOG_WARNING or LOG_NOTICE, and should probably not even be emitted by default in prod.)
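
A small sketch of that with Python's stdlib wrapper around syslog(3) (the ident and messages are made up; Unix only):

    import syslog

    # Only ship NOTICE and above; chatty DEBUG/INFO never leaves the process.
    syslog.openlog("orders", syslog.LOG_PID, syslog.LOG_DAEMON)
    syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_NOTICE))

    syslog.syslog(syslog.LOG_DEBUG, "cache miss for user 42")          # dropped by the mask
    syslog.syslog(syslog.LOG_WARNING, "retrying flaky upstream call")  # "huh, that looks sus"
    syslog.syslog(syslog.LOG_CRIT, "database unreachable")             # the page-the-team level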

> someday I want to rename that call to log.PageEntireDevTeamAt3AM()

Yup, that's what LOG_CRIT and above is for :-)


In my experience, the problem usually is that severity is context sensitive. For example, an external service temporarily returning a few HTTP 500s might not be a significant problem (you should basically expect all web services to do so occasionally), whereas it consistently returning them over a longer duration can definitely be a problem.
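
One way to encode that context, as a sketch (the window size and threshold are arbitrary):

    import time
    from collections import deque

    class ErrorRateMonitor:
        """Tolerate occasional HTTP 500s; escalate when they persist."""

        def __init__(self, window_seconds=300, threshold=0.2):
            self.window = window_seconds
            self.threshold = threshold
            self.samples = deque()  # (timestamp, was_error) pairs

        def record(self, status):
            now = time.time()
            self.samples.append((now, status >= 500))
            # Drop samples that have aged out of the window.
            while self.samples and self.samples[0][0] < now - self.window:
                self.samples.popleft()
            errors = sum(1 for _, failed in self.samples if failed)
            rate = errors / len(self.samples)
            return "critical" if rate > self.threshold else ("warning" if errors else "ok")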


That is exactly what the previous commenter meant - developers are bad at setting the correct severity for logs.

This becomes an even bigger problem in huge organizations where each team has its own rules, so consistency vanishes.


> I still haven't actually done it yet, but someday I want to rename that call to log.PageEntireDevTeamAt3AM() and see what happens to log quality..

The second best thing (after adding metrics collection) we did as a dev team was forcing our way into the on-call rotation for our application. Instead of grumpy sysops telling us how bad our application was (because they had to get up in the night to restart services and whatnot) but not giving us any clues to go on to fix the problems, we could now do triage as the issues were occurring and actually fix them. Now with a mandate from our manager, because those on-call hours were coming out of our budget. We went from multiple on-call issues a week to me gladly taking weeks of on-call rotation at a time because I knew nothing bad was gonna happen. Unless netops did a patch round on their equipment, which they always seemed to forget to tell us about.


  I want to rename that call to log.PageEntireDevTeamAt3AM() and see what
  happens to log quality
I managed to page the entire management team after hours at megacorp. After spending ~7 months tasked with relying on some consistently flaky services, I'd opened a P0 issue on a development environment. At the time I tried to be as contrite as possible, but in hindsight, what a colossal configuration error. My manager swore up and down he never caught flak for it, but he also knew I had one foot out the door.


> Developers, in general, are really bad at logging

That's not the problem. I'll regularly see errors such as:

    Connection to "http://maliciouscommandandcontrol.ru" failed. Retrying...
Just... noise, right? Best ignore it. The users haven't complained and my boss said I have other priorities right now...


> my boss said I have other priorities right now

Way to bury the lede...


Horrors from enterprise - a few weeks ago a solution architect forced me to roll back a fix (a basic null check) that they "couldn't test" because it's not a "real world" scenario (testers creating incorrect data would crash the business process for everyone)...


Your system could also retry the flaky tests. If a test still fails after 3 or 5 runs, it's almost certainly a genuine defect.
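
A self-contained sketch of that idea (CI systems and pytest plugins can do the rerunning for you, but the mechanism is roughly this; names are made up):

    import functools
    import time

    def retry_flaky(runs=3, delay=1.0):
        """Re-run a flaky test a few times; if it never passes, treat it as a real defect."""
        def decorator(test_fn):
            @functools.wraps(test_fn)
            def wrapper(*args, **kwargs):
                for attempt in range(1, runs + 1):
                    try:
                        return test_fn(*args, **kwargs)
                    except AssertionError:
                        if attempt == runs:
                            raise              # still failing after N runs: a genuine failure
                        time.sleep(delay)      # brief pause before the next attempt
            return wrapper
        return decorator

    @retry_flaky(runs=5)
    def test_checkout_flow():
        ...  # the flaky test body goes here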


This is the power of GitHub Actions, where each workflow is one YAML file.

If you have flaky tests, you can isolate them in their own workflow and deal with them separately from the rest of your CI process.

It does wonders for this. The idea of a monolithic CI job seems backward to me now.


> They won't exceed speed limits.

Uhh, about that...

> Tesla allows self-driving cars to break speed limit, again

- https://www.theguardian.com/technology/2017/jan/16/tesla-all...


Because going the speed limit is unsafe in certain scenarios: other human drivers expect people to exceed the limit like they do.

It's the same with Tesla wanting to allow rolling stops at stop signs – not the letter of the law, but it's what humans expect. In many instances you are more likely to cause a minor accident (being rear-ended) by coming to a complete stop at a stop sign than by rolling through an intersection that clearly has nobody else in it.

Won't be a problem once most cars on the road are driverless.


I feel like I should be able to file a class action lawsuit against Tesla if the cars are programmed to speed.


So do you think anything should be legal on the roads by default? I think there is a clear and obvious justification for why we don't let anyone drive anything on the roads: safety. Letting people test whatever they want on our roads is a risk to all other road users.


>>So do you think anything should be legal on the roads by default?

yes

>>I think there is a clear and obvious justification for why we don't let anyone drive anything on the roads: safety.

Safety, the drumbeat of authoritarians for all of human history. Safety is often used to limit freedom; rarely is safety the actual reason for laws and regulations, and even more rarely does safety increase as a result of the rules.

In this case there is no safety issue even being claimed: people are annoyed by them, believe they take away from other public transit (i.e. they are politically opposed to them), or have a wide range of other non-safety objections.

If there is a safety issue that actually endangers others we have many mechanisms to check that including legal liability.

Further, with safety you get into the "if it saves one life" debate as well. In short, if safety were the only goal we would have no freedom at all; life is about risk management, not ensuring absolute safety. I have no desire to live in a "safe" society where safety first is the goal.

To misquote Mike Rowe.. "Safety Third... lots of things come before safety"


> as evidenced by the literally free taxi service that is just being given out to many people in SF right now.

You realize that it's only free while they test and it's not going to be free forever, right? In fact, the protests are in response to a decision which allowed the companies to start charging for driver-less rides, so that is already changing.

Hard to say which way people will go overall. I personally am no longer a fan of driver-less cars. I used to be but now I see it as a doubling down of car-dependence. I could see a nasty rebound effect [1] through increased convenience and a lot of other negative consequences if cities adapt their infrastructure to cater to those vehicles. History repeating itself for "automobile progress." A possible second coming of demolishing our cities and neighborhoods to make way for cars. Yeah I know most if not all of the arguments for how the technology will just fix all the problems and make everything about cars better, I used to make those arguments myself. I don't believe them anymore.

- https://en.wikipedia.org/wiki/Rebound_effect_(conservation)


> You realize that it's only free while they test and it's not going to be free forever, right

You realize that the whole point of self driving taxis is that they are cheaper to run than regular taxis, and that's how they will compete, right?

> I used to be but now I see it as a doubling down of car-dependence

Ok, so you just don't like taxis in general then.

That's fine, but it changes nothing about the idea that self driving taxis are strictly better than regular taxis.

> I could see a nasty rebound effect [1] through increased convenience

Oh the irony of this statement.

At the very beginning of your post, you talk about how the benefits of self driving cars are temporary, and yet now you admit that the problem is that they are too good.

I am glad that I have convinced you that self driving cars are such a benefit, such a popular consumer product that everyone will love, so much so that it will cause problems because of just how useful they are!


You're putting a lot of words in the parent's mouth.

>> I used to be but now I see it as a doubling down of car-dependence

> Ok, so you just don't like taxis in general then.

That was not the point. One of the general arguments for AVs is that they will reduce car-dependence: that we're all going to be able to live car-free while AVs shuttle us around in hyper-efficient transport systems.

If the way we get there is by having a taxi service that is 70% cheaper, I don't see how that follows. I agree with the parent, the claim that AVs are a ticket out of car dependence seems like a bill of goods.


My claim was that self driving taxis are better than regular taxis because they are cheaper.

The person responded with arguments that they don't like a car-driven society.

That does not contradict my point that self driving taxis are better than regular taxis.

Furthermore they admitted that self driving taxis are more convenient, which further supports my point.


> they are cheaper

Time will tell, but I suspect it'll be economically unviable once the lawsuits start flowing


> You realize that the whole point of self driving taxis is that they are cheaper to run than regular taxis, and that's how they will compete right?

Long term, that will be true if there is substantial competition between driverless car providers (since it requires very significant investment, that won't necessarily be the case).

Otherwise these companies will just keep their margins high and charge only slightly below what a car driven by a person costs.


> keep their margins high and charge only slightly below what a car driven by a person costs

So you mean like, "the whole point of self-driving taxis is that they are cheaper to run, ... and that's how they will compete?"

You're trying to convince yourself too hard that you disagree with the previous comment.

There will probably be a bit of both: using the margin to seriously undercut the taxis and Ubers in order to drive them out, and then raising prices.


> too hard

Why would you say that? I wasn’t trying that hard at all.

It just seemed that you were implying that they will be significantly cheaper for consumers which is not at all obvious at this point.


One of the four commissioners (John Reynolds) of the California Public Utilities Commission who voted to approve the expansion previously worked at Cruise [1]. I'm not sure about the others' backgrounds, but that's already 25% of the vote with a conflict of interest.

- https://www.bbc.com/news/business-66478070


I’m not sure having a board with no industry experts on it is a good idea.


Isn't that trivially a conflict of interest?


Usually it goes the other way: being on the board and then being given a position at Cruise. Having worked for a company in the past doesn't mean you like the company or will be soft on them. In fact, many people hate their former employers. So without anything further than "they used to work at a big company in San Francisco", it doesn't seem like a conflict of interest to me.


> Having worked for a company in the past doesn't mean you like the company or will be soft on them.

This often is implied, actually. Better safe than sorry.


But I'm certain that having the foxes guard the hen house is a terrible idea.


Looks like oil industry astroturfing [1] to me. The group that published this (GWPF) is basically an anthropogenic climate change denying lobby group that refuses to reveal its funding sources [2].

Yeah, color me skeptical about their publication that is essentially saying gas systems are better than electric heat pumps for the environment.

- [1](https://www.youtube.com/watch?v=FOi05zDO4yw)

- [2](https://en.wikipedia.org/wiki/The_Global_Warming_Policy_Foun...)

