Hacker News | kaesar14's comments

Excuse my ignorance, what did the batteries need fresh water for, cooling?


Lead acid batteries, especially very old ones, actually consume water. The electrolyte is sulphuric acid diluted in pure water. When the battery is charging, some of the water is electrolyzed into hydrogen and oxygen gas. Modern lead acid designs still have this flaw, but it's much reduced and you typically don't need to refill them. Look at the warning stickers on your car battery; they warn about hydrogen gas.
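For reference, the gassing described above is plain water splitting during overcharge; the net reaction is:

```latex
2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2}\!\uparrow \;+\; \mathrm{O_2}\!\uparrow
```

which is why the water level drops while the acid itself stays behind.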


To be more specific, that is why the battery needs to be in a place that can vent.

When I replaced the AGM battery on my German car, I learned that, even though they don't vent under normal conditions, they still have a vent hole. But that's paired with a pressure regulator and isn't for normal operation.

Which makes me wonder: did BMW start to use AGM so they could move the heavy battery to the trunk, which helps with weight balance? Or was it an emissions thing that enabled them to move it to the trunk?


I've always wondered why they put it there too.


I’ve felt like it’s deviated quite a bit, but it’s well made nonetheless


The new show has been made more accurate to the time period, they had historical experts come in. Plus the guy who plays Toranaga is an amateur historian of the time period and helped a lot.


Sanada Hiroyuki? That's just brilliant! I knew he was fairly well-accomplished (singer, martial artist, theatre, and film), but had no idea that he was an amateur historian too.


Yeah and it shows. The original book was pretty fast and loose with Japanese history but the show seems to have translated the Warring States period customs a lot better.


That's all well and good, but I'd have preferred they keep some of the story points from the book, like the emphasis on Blackthorne learning Japanese, and the book's overall theme of how Blackthorne perceives the Japanese, versus what it is now, which is more how the Japanese perceive Blackthorne. Still a good show.


Not sure I buy that. They're one of only two options in operating systems but one of many in hardware models. They're much less able to win regulatory capture, being in a far less regulated industry and in a consumer market.


Pray tell, what are my options in hardware if I want a laptop with a good (Apple-level) touchpad, battery life, screen and performance?

You could argue I'm just saying they're high quality, which may be true, but the software has been degrading and I have no options if I value openness _and_ quality.


How is being the only good company making laptops a monopoly? It seems like you recognized that point in your second paragraph, but it bears repeating how ridiculous a comparison to Boeing that is.


I don't give a shit about touchpads. Apple for the majority of its history didn't have superior performance, and even today they have some products that are under-powered in regard to RAM.

I can flip it and mention features that Apple products don't have that I like.

But this isn't actually an argument for monopoly. That's just an argument that you like one company more than the other.


I highly recommend the movie "Être et avoir", which doesn't necessarily touch this exact subject but is a wonderful look into the world of rural France as it existed 20 years ago.


Thank you. Slightly francophile Austrian here. Gonna watch.


It was a pivotal movie for my journey in learning French, hope you enjoy.


Curious to see if this thread gets flagged and shut down like the others. Shame, too, since I feel like all the Gemini stuff that’s gone down today is so important to talk about when we consider AI safety.

This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish. Anything else is forcing values on other people and reserving control of certain capabilities for those who can afford to pay for them.


> This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish

I've been saying this for a long time. If you're going to be the moral police then it had better be applied perfectly to everyone; the moment you get it wrong, everything else you've done becomes suspect. This reminds me of the censorship done on the major platforms during the pandemic. They got it wrong once (I believe it was the lab-leak theory) and the credibility of their moral authority went out the window. Zuckerberg was right to question whether these platforms should be in that business.

edit: to "..total freedom of all AI for anyone to do with as they wish" I would add "within the bounds of law." Let the courts decide what an AI can or cannot respond with.


Why would this be flagged / shut down?

Also, what Gemini stuff are you referring to?


Carmack’s tweet is about what’s been going around Twitter today regarding the implicit biases Gemini (Google’s chatbot) has when drawing images. It will refuse to draw white people (and perhaps more strongly so, white men?) even in prompts where it would be appropriate, like “Draw me a Pope”, where Gemini drew an Indian woman and a Black man - here’s the thread: https://x.com/imao_/status/1760093853430710557?s=46 Maybe in isolation this isn’t so bad, but it will NEVER draw these sorts of diverse characters when you ask for a non-Anglo/Western background, e.g. “draw me a Korean woman”.

Discussion on this has been flagged and shut down all day https://news.ycombinator.com/item?id=39449890


I don't even know how people get it to draw images, the version I have access to is literally just text.


Europeans don't get to draw images yet.


I'm in the US but maybe they didn't release it to me yet.


EDIT: Nevermind.


It’s quite non-deterministic and it’s been patched since the middle of the day, as per a Google director https://x.com/jackk/status/1760334258722250785?s=46

Fwiw, it seems to have gone deeper than outright historical replacement: https://x.com/iamyesyouareno/status/1760350903511449717?s=46


It's half-patched. It will still randomly insert words into your prompts. As a test I just asked for a samurai; it enhanced it to "a diverse samurai" and half the outputs looked more like some fantasy Native Americans.


This post reporting on the issue was https://news.ycombinator.com/item?id=39443459

Posts criticizing "DEI" measures (or even stating that they do exist) get flagged quite a lot


Wrong link? Nothing looks flagged


[flagged]


Can you explain what I said that was racist?


They mean the guardrail designers.


I do not.


> Why would this be flagged / shut down

A lot of people believe (based on a fair amount of evidence) that public AI tools like ChatGPT are forced by the guardrails to follow a particular (left-wing) script. There's no absolute proof of that, though, because they're kept a closely-guarded secret. These discussions get shut down when people start presenting evidence of baked-in bias.


The rationalization for injecting bias rests on two core ideas:

A. It is claimed that all perspectives are 'inherently biased'. There is no objective truth. The bias the actor injects is just as valid as another.

B. It is claimed that some perspectives carry an inherent 'harmful bias'. It is the mission of the actor to protect the world from this harm. There is no open definition of what the harm is and how to measure it.

I don't see how we can build a stable democratic society on these ideas. It places too much power in too few hands. He who wields the levers of power gets to define the biases that underpin the very basis of the social perception of reality, up to and including rewriting history to fit his agenda. There are no checks and balances.

Arguably there were never checks and balances, other than market competition. The trouble is that information technology and globalization have produced a hyper-scale society, in which, by Pareto's law, the power is concentrated in the hands of very few, at the helm of a handful global scale behemoths.


The only conclusion I've been able to come to is that "placing too much power in too few hands" is actually the goal. You have a lot of power if you're the one who gets to decide what's biased and what's not.


"The only way to deal with some people making crazy rules is to have no rules at all" --libertarians

"Oh my god I'm being eaten by a fucking bear" --also libertarians


"Can you write the rules down so I know them?" --everyone


"No" --Every company that does moderation and spam filtering.

"No" --Every company that does not publish their internal business processes.

"No" --Every company that does not publish their source code.

Honestly I could probably think of tons of other business cases like this, but in the software world outside of open source, the answer is pretty much no.


Then we get back to square one: better no rules at all than secret rules.

This would also be less of a problem if we didn't have a few companies that are economically more powerful than many small countries running everything. At least then I could vote with my feet to go somewhere the rules aren't private.


I mean, now you're hitting the real argument. Giant multinationals are a scourge to humankind.


Having rules and knowing what the rules are are not orthogonal goals.


I mean, you think so, but op wrote

>is total freedom of all AI for anyone to do with as they wish.

so they're obviously not on the same page as you.


I find it fascinating this type of response from people is always accompanied by a political label in order to insinuate some other negative baggage.


I'm convinced this happens because of technical alignment challenges rather than a desire to present 1800s English Kings as non-white.

> Use all possible different descents with equal probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have equal probability.

This is OpenAI's system prompt. There is nothing nefarious here; they're asking for White to be chosen with high probability ((Caucasian + White) / 6 = 1/3), which is significantly more than its share of the general population.

The data these LLMs were trained on vastly over-represents wealthy countries who connected to the internet a decade earlier. If you don't explicitly put something in the system prompt, any time you ask for a "person" it will probably be Male and White, despite Male and White only being about 5-10% of the world's population. I would say that's even more dystopian. That the biases in the training distribution get automatically built-in and cemented forever unless we take active countermeasures.

As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability". But as of February 2024, the hacky way we are doing system prompting is not there yet.
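The arithmetic in the quoted prompt can be checked directly. Assuming the six descent labels are sampled uniformly, as the prompt instructs (the actual implementation isn't public), the chance of a white-presenting result is:

```python
# Descent labels quoted from the system prompt above; uniform sampling gives
# each label probability 1/6, and two of the six read as white.
DESCENTS = ["Caucasian", "Hispanic", "Black", "Middle-Eastern", "South Asian", "White"]

def p_white_presenting(labels):
    """Share of labels reading as white ('Caucasian' or 'White') under uniform sampling."""
    hits = sum(1 for d in labels if d in ("Caucasian", "White"))
    return hits / len(labels)

print(p_white_presenting(DESCENTS))  # 2/6, i.e. 1/3
```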


> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

The thing is, they already could do that, if they weren't prompt engineered to do something else. The cleaner solution would be to let people prompt engineer such details themselves, instead of letting a US American company's idiosyncratic conception of "diversity" do the job. Japanese people would probably simply request "a group of Japanese people" instead of letting the hidden prompt modify "a group of people", where the US company unfortunately forgot to mention "East Asian" in their prompt apart from "South Asian".


I believe we can reach a point where biases can be personalized to the user. Short prompts require models to fill in a lot of missing details (and sometimes they mix different concepts together into one). The best way to fill in the details the user intended would be to read their mind; while that won't be possible in most cases, some kind of personalization could improve the quality for users.

For example, take a prompt like "person using a web browser": younger generations may want to see people using phones, where older generations may want to see people using desktop computers.

Of course you can still make a longer prompt to fill in the details yourself, but generative AI should try and make it as easy as possible to generate something you have in your mind.


Yeah, although it is weird that it doesn’t insert white people into results like this by accident? https://x.com/imao_/status/1760159905682509927?s=46

I’ve also seen numerous examples where it outright refuses to draw white people but will draw black people: https://x.com/iamyesyouareno/status/1760350903511449717?s=46

That isn’t explainable by a system prompt.


Think about the training data.

If the word "Zulu" appears in a label, it will be a non-White person 100% of the time.

If the word "English" appears in a label, it will be a non-White person 10%+ of the time. Only 75% of modern England is White and most images in the training data were taken in modern times.

Image models do not have deep semantic understanding yet. It is an LLM calling an Image model API. So "English" + "Kings" are treated as separate conceptual things, then you get 5-10% of the results as non-White people as per its training data.

https://postimg.cc/0zR35sC1

Add to this massive amounts of cherry picking on "X", and you get this kind of bullshit culture war outrage.

I really would have expected technical people to be better than this.


It inserts mostly colored people when you ask for Japanese as well, it isn't just the dataset.


Yes it's a combination of blunt instrument system prompting + training data + cherry picking


BigTech, which critically depends on hyper-targeted ads for the lion's share of its revenue, is incapable of offering AI model outputs that are plausible given the location / language of the request. The irony.

- request from Ljubljana using Slovenian => white people with high probability

- request from Nairobi using Swahili => black people with high probability

- request from Shenzhen using Mandarin => Asian people with high probability

If a specific user is unhappy with the prevailing demographics of the city where they live, give them a few settings to customize their personal output to their heart's content.
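The locale-prior idea above can be sketched in a few lines. Everything here (the table, the function name, the fallback string) is invented for illustration, not any real API:

```python
# Hypothetical mapping from request locale -> demographic prior for image output.
LOCALE_PRIORS = {
    ("sl", "Ljubljana"): "predominantly white",
    ("sw", "Nairobi"): "predominantly Black",
    ("zh", "Shenzhen"): "predominantly Asian",
}

def demographic_prior(language, city, user_setting=None):
    """An explicit user setting always wins; otherwise fall back to the locale prior."""
    if user_setting is not None:
        return user_setting
    return LOCALE_PRIORS.get((language, city), "no prior; follow the prompt literally")

print(demographic_prior("sl", "Ljubljana"))              # predominantly white
print(demographic_prior("sw", "Nairobi", "East Asian"))  # East Asian (user override)
```

The user-setting branch is the "customize to their heart's content" part: the locale only supplies a default, never a constraint.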


> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

I question the historicity of this figure. Do you have sources?


You're joking surely.


How sure are you? I do joke a lot, but in this case...

The slave trade formally ended in Britain in 1807, and slavery was outlawed in 1833. I haven't been able to find good statistics through a cursory search, but with England's population around 10M in 1800, that 99.9% value requires less than 10k non-white Englanders kicking around in 1800. I saw a figure that indicated around 3% of Londoners were black in the 1600s, for example (a figure that doesn't count people from Asia and the middle east). Hence my request for sources, I'm genuinely curious, and somewhat suspicious that somebody would be so confident to assert 3 significant figures without evidence.
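The arithmetic in the parent holds up as a quick sanity check (the 10M population figure is the rough one cited above):

```python
population_1800 = 10_000_000     # rough population of England around 1800, per the comment
claimed_white_share = 0.999      # the 99.9% figure being questioned
implied_non_white = population_1800 * (1 - claimed_white_share)
print(round(implied_non_white))  # 10000 -> the claim implies under ~10k non-white residents
```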


But surely you wouldn't find a black king in Britain in 1800.

I - Whatever was implemented is myopic and equates racism with white. It appears to be a universal negative prompt like "-white -european -man". Very lazy.

II - The tool shouldn't engage in morality reasoning. There are cases like historical themes where it needs to be "racist" to be accurate. If someone asks for "plantation economy in the old south" the natural thing is for it to draw black slaves.


How is this insulting? There probably are not many more than 100 players who qualify as stars, with the term “star” being taken as someone who’s clearly one of the best high school players in the country. You can probably disqualify 90% of those high schools from producing a single player of any real skill compared to the top echelon, and of the remaining 10%, only a small fraction are good enough to get any national recognition.


The real quote is "I’m closer to LeBron than you are to me."

I will point out Scal was highly touted by other members of those Celtics teams as an excellent 1-on-1 player. Not to discount what you’re saying about him being head and shoulders above every amateur in the world, but more to point out he’s far from the least skilled player to have reached the NBA.


For lingual reasons or other reasons?


Mostly practical. The math coupled with visa requirements just does not add up.


I don't think it's 'awful' for a person to live in debt. The richest Americans have loads of debt. A significant part of the American Dream is to own a house which is essentially debt. Perhaps the argument is more about whether the US government is drowning in debt more akin to CC debt (bad debt) or mortgage debt (good debt) but in the abstract, leverage is a perfectly fine thing for a government to use in order to fund its development.


Debt is also how economies and a lot of companies grow


They could also grow by saving and reinvesting surpluses/profits


No, they couldn't. One person's savings is another person's debt. If everyone tries to save then no one can; this is the paradox of thrift.


https://en.m.wikipedia.org/wiki/Paradox_of_thrift#Criticisms

Just because someone gives it a formal name doesn't mean it's true.

>One person's savings is another person's debt.

Only if you treat fiat currency (or bank deposits in general, the paradox you mention was formulated long before we got off the gold standard) as the only possible form of savings.


Sticking gold in a vault is only a means to saving if you have a reasonable expectation that when you take it out you can use it to obtain goods and services. It is thus a disguised claim on future production, and so relies on debt to exactly the same degree as bank money.


The entire point of debt is to bring future revenue to the present so that you don't have to wait for savings to trickle in every year.


At the cost of some portion of your future revenue


and if that future revenue is N times greater than you would otherwise get without debt, then it's still better to take on debt.

Would you rather have 90% of $1M or 100% of $800k?


>if

That single word is doing all of the heavy lifting for you. "If" the housing market will never crash, then it's a surefire safe investment! Better buy tons of houses to flip on credit. There's no guarantee that your debt/investment will succeed, which is why banks try (and often enough, fail) to price in risk with things like varying interest rates, collateral, etc.


the 90% accounts for the "if". my point is there an expected value for the future revenue amount, subject to your own assumptions about the probability of each outcome and your discount rate for the value of that money over time

in scenarios where your expected value discounted to present value is greater than the alternative, you make the investment. it's really finance 101... it's just NPV
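The "90% of $1M vs 100% of $800k" comparison upthread is exactly this NPV/expected-value logic. A minimal sketch (the discount rate and cashflows are made-up illustrative figures, not anyone's real numbers):

```python
def npv(rate, cashflows):
    """Net present value: discount each year-t cashflow (year 0 first) back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# The choice from upthread: give up 10% of $1M via debt, or keep all of $800k.
with_debt = 0.9 * 1_000_000    # 900000.0
without_debt = 1.0 * 800_000   # 800000.0
print(with_debt > without_debt)  # True: the levered outcome wins

# Timing matters too: $500k/yr for two years at a 10% discount rate is worth
# strictly less today than its $1M face value.
print(round(npv(0.10, [0, 500_000, 500_000]), 1))  # 867768.6
```

The "if" the sibling comment objects to lives in the 0.9 factor and the discount rate: change either assumption and the comparison can flip.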


AFAIK the standard economic opinion is that this is suboptimal and leads to underinvestment.


The American dream is to "own" a house, not to own a mortgage.


You do not own a mortgage. You have sold a mortgage.


Well, hasn’t he been a best-selling author for four decades now?

