
Petition to respell the word as “bluebberry.”

That the prediction engine so strongly suggests there should be two b’s in the middle implies that we may, in fact, be the ones spelling it wrong.


It is Bblueberry. Maybe we can get GPT-5 to write the petition.


“Quarter” has a nice big plosive at the beginning, which really gives it impact. The “th” in “third pounder” makes it sound kind of weak in comparison.

Bigger Burger is great! Alliteration, rhyme, and TWO plosives!


And the more you practice directing your attention, the better your skills of awareness become! You can choose positivity over negativity, joy over anger, love over hate.


This is one of the greatest benefits I've gained from mindfulness practice.


Is it the business model, or the people apathetic to its problems?


The business model, for sure. Primarily the fault of the people who couldn't find a way to make money other than by selling their users (at best), and of course of the users themselves, who knew and still normalized it.


It’s interesting to observe that, if this is indeed caused by a failure in the stall-prevention system (and I say “if” since it’s certainly too soon to draw conclusions), the media and the discussion around it seem to gravitate toward disabling the system or avoiding the aircraft entirely.

But when we think forward to the inevitable autonomous vehicle accidents that will occur, the conversation turns to how many lives they’ll have saved, and how much safer they’ll be.

Is there a known psychological phenomenon for “negative hindsight, positive foresight” that I can go learn more about?


The problem is that you can't disable that system: the aircraft is not certifiable without it; that is to say, the risk of stalling is considered too high without that system. The fact that the system may have caused two planes to crash doesn't mean it hasn't prevented the ~350 other 737 MAXes flying from stalling.

So if the MCAS is indeed the cause of that crash, I expect a fleet grounding until the system is fixed. That assumes it is fixable, which I think it is, though it might require more than just a software fix.


> But when we think forward to the inevitable autonomous vehicle accidents that will occur, the conversation turns to how many lives they’ll have saved, and how much safer they’ll be.

Driving to the airport is orders of magnitude less safe than flying. If a malfunctioning automatic system is decreasing the reliability of flying, that's a huge problem. However, I'd wager that even an autonomous car that's only just able to pass a driver's exam would be significantly safer than a human driver because it would _follow the rules_ and _not be distracted_. Even if that system isn't perfect, it's probably still better than an experienced driver.


Except we test humans with tests designed to show that a human capable of generalizing knowledge can do a few things, which proves they can do more things than that. If the AI isn't capable of generalizing, then just being able to pass the driver's test might create a very incapable AI.


This is what scares me about the push toward fly/drive-by-wire.

At some point your designs start breaking into envelopes where the machine cannot be considered safe once the automated systems fail, leaving your pilot, a highly trained human being, powerless in the face of catastrophic system failure.

An uncontrollable tool is not a tool, but a coffin waiting to happen. I don't think any kind of "routine" transit system should be designed in a manner that so thoroughly overwhelms a human crew's workload that it becomes so dependent on automation it cannot be certified otherwise.

To reword: if it can't be flown safely with the computers off, it probably should not be a design we allow for passenger transport. Markets be damned. When your margins include human lives lost, economics needs to stop being your primary optimization. Dollars should only be important after you stop being a corpse factory.


Interesting point.

I guess whether it’s human error at the hands of the pilot/driver, or human error at the hands of the engineer/designer, we can never fully remove it from the equation.

Should we just give up? It seems the best we can do is try to mitigate risk, and automated systems condense that risk down into fewer points of failure (i.e., there are fewer engineers than users!).


The difference is, the pilot is one person, and his life is in danger, so he is dead serious. Designers and engineers work in teams, supervised by managers and driven by the market. Responsibility is diluted to the point that everyone feels they did nothing wrong, even though the results are catastrophic.


I am sure that an autonomous system without a manual override will face the same pushback.


Why not both?


Limited time and resources


There is a balance. If aesthetics are a limiting factor, as determined by research, then they’re worth improving.


Are you succeeding?


Absolutely!


For me, concocted stories (if done right) are an exercise in human creativity & ingenuity. The characters are an amplification of our own character traits, and the situations are metaphors for our own lives.

If done masterfully, this overrealization captivates me, and the emotions I feel at the end are real. And I can learn something new about myself from it.


All squares are rectangles, but not all rectangles are squares.


Not all terrorists are Muslims, just a majority of them in our current time. This is due to the destabilization of the region more than to their religion, although the more radical versions of the religion certainly play a part. Similar things occurred in Ireland back in the day: terrorism created by destabilization and fueled by radical religious zeal.

Edit: my autocorrect sucks


> Not all terrorists are Muslims, just a majority of them in our current time

In terms of terrorism in the US, that's not actually true by most counts. The US has seen more far-right terrorists than radical Islamic terrorists. Casualty counts for the Islamic terrorists are typically higher, but there are both more events and more individuals involved in far-right terrorism.

It's still worth noting that in terms of actual risk-per-individual both are astonishingly rare, and if your goal is to save lives that might otherwise be lost to violence there are far, far better ways of doing it.


(serious question) Do any genetic algorithms take something like this into account? Seems like it could be a good way to increase the odds of furthering successful generations.


A lot of genetic algorithms already seed the next generation with copies of the top X fittest individuals, so it's essentially already done.
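That strategy is usually called elitism. A minimal Python sketch of the idea, with a made-up real-valued genome and Gaussian mutation chosen purely for illustration:

    import random

    def next_generation(population, fitness, elite_count=2, mutation_scale=0.1):
        # Rank individuals by fitness, best first.
        ranked = sorted(population, key=fitness, reverse=True)

        # Elitism: the top `elite_count` individuals survive unmodified.
        elites = ranked[:elite_count]

        def mutate(genome):
            # Gaussian perturbation of a real-valued genome (one common choice).
            return [g + random.gauss(0, mutation_scale) for g in genome]

        # Fill out the rest of the generation with mutated copies of
        # parents drawn from the fitter half.
        parents = ranked[: max(1, len(ranked) // 2)]
        children = [mutate(random.choice(parents))
                    for _ in range(len(population) - elite_count)]
        return elites + children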


IIRC, it's common for GAs to use mutation as the only source of variation, with no crossbreeding at all. And those that do use crossbreeding usually don't have a concept of the "sex" of an individual. So, crossbreeding of what are effectively two haploid gametes (a polar body isn't exactly a gamete, but...) from the same diploid individual when mates of the opposite sex are in short supply isn't something that would be directly applicable to most genetic algorithms.

You could build a framework for evolving genetic algorithms where this would be relevant and meaningful, but I'm not sure there is any good reason to bother with all the complexity that would be necessary.
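To make that distinction concrete, a rough sketch of the two variation operators (hypothetical real-valued genomes, purely illustrative):

    import random

    # Mutation-only variation: each child is a perturbed copy of one parent.
    def mutate(parent, scale=0.1):
        return [g + random.gauss(0, scale) for g in parent]

    # Crossover variation: each child mixes genes from two parents. There is
    # no notion of "sex" here: any two genomes can pair, so a parthenogenesis-
    # like fallback for when mates are scarce has no direct analogue.
    def crossover(parent_a, parent_b):
        return [random.choice(pair) for pair in zip(parent_a, parent_b)]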


Gotcha. It's not that genetic algorithms are meant to emulate the real world, but rather that they're inspired by it.


While possible, I imagine it would require carting around more baggage with each individual, and would also require you to simulate scarcity. It's been a long time since I played around with genetic algorithms in college, but I'm not sure they often model making it hard to find a mate.


Yeah, sure they could.

