timmg's comments | Hacker News

Dumb question, but: is there a way to find/filter ones that are? (I can't seem to find anything in the (web) UI that makes it clear which books are downloadable.)

There wasn't when I went through my collection. Reading the announcement from Amazon, it looks like existing DRM-free books will not be automatically flagged as downloadable.

The publisher/author will have to go through a process to have their books be downloadable again.


> It’s not that surprising that many successful people seem to be strong fans of heritability, or more broadly, of the idea that metrics like IQ point to some sort of “universal independent” metric of value.

I agree that that could be a motivation. But I would also say that having a motivation for a given result doesn't preclude that result. That is generally true in science.

I'm not an expert. But there seems to be fairly overwhelming evidence that some significant amount of intelligence is heritable. That IQ is a reasonably good measure (or proxy) for intelligence. And that IQ correlates well with a lot of other things like educational attainment and income.

That doesn't mean that your genes determine your future. But it does suggest that some people are "born" in a better position than others -- aside from their socio-economic status.

This shouldn't be controversial. Height is well-known to be heritable. Being tall gives you a better shot at making the NBA. The same is true for many other things.


> This shouldn't be controversial. Height is well-known to be heritable.

I don't understand why so many commenters here are arguing against a straw man. The article author does not and never did believe in the "blank slate" theory. The author has a "centrist" view that genes matter but are not the only determining factor.


I was responding to the previous comment, not so much the article.

> The author has a "centrist" view that genes matter but are not the only determining factor.

Nobody thinks genes are the only determining factor (that's a straw man on the other side :)

Most people agree it is somewhere on a continuum. Some people think it leans more one way; others the other way. Some people want it to lean more one way; others want it to lean more the other.


> I was responding to the previous comment, not so much the article.

How so? You said, "This shouldn't be controversial. Height is well-known to be heritable. Being tall gives you a better shot at making the NBA. The same is true for many other things." But there's no indication that the previous comment was arguing the opposite of that. Rather, the previous comment was arguing against this idea: "Surely success and intelligence is just an inborn thing, and thus inevitable and unchangeable. There’s nothing they can do, and it was always going to end up that way. Inevitability erases any feelings or guilt or shame."


I said quite a bit more than what you quoted. And I find your interest in my comment and why I made it... odd.

I'm sorry if I didn't get my point across in a way that satisfies you. But I suggest you take a step back and re-read what both of us wrote. Or maybe just move on.


The author questions whether genes are a meaningful factor, in the large, and comes down against it. I don't think that makes them a centrist; I think they're just rejecting a caricature (the "blank slate") laid out by people strongly invested in the idea that intelligence is determined genetically.

> At 30%, one does observe a faint correlation between genetic potential and IQ. The correlation becomes clearer at 50%, while remaining quite noisy. This is an essential aspect to keep in mind: 50% may sound like a solid heritability figure, but the associated correlation is rather modest. It’s only at 80% that the picture starts to “feel like” a line.

My understanding is that the author thinks the heritability of IQ is somewhere between 30% and 50%, but not 80% or 100%, and not 20% or 0%.
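
To make the quoted numbers concrete, here's a quick simulation (my sketch, not the article's code; the variance-share model is the textbook simplification). If genes explain a fraction h2 of the variance, the gene-IQ correlation works out to sqrt(h2), so 50% heritability gives r of about 0.71: real, but visibly noisy.

    # My sketch, not the article's code. Assumes the textbook model:
    # IQ = sqrt(h2) * genes + sqrt(1 - h2) * everything-else.
    import math
    import random
    import statistics  # statistics.correlation needs Python 3.10+

    def gene_iq_correlation(h2, n=50_000):
        g = [random.gauss(0, 1) for _ in range(n)]   # genetic potential
        e = [random.gauss(0, 1) for _ in range(n)]   # environment + noise
        iq = [math.sqrt(h2) * gi + math.sqrt(1 - h2) * ei
              for gi, ei in zip(g, e)]
        return statistics.correlation(g, iq)

    for h2 in (0.3, 0.5, 0.8):
        print(h2, round(gene_iq_correlation(h2), 2))  # ~0.55, ~0.71, ~0.89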


I'm not reacting against the article, but the people mentioned in the article that the author is critiquing.

That hasn't been my experience at all. I always wondered if we just get used to how to prompt a given model, and that it's hard to transition to another.

They will, I'm sure.

The big difference is that Google is both the chip designer *and* the AI company. So they get both sets of profits.

Both Google and Nvidia contract TSMC for chips. Then Nvidia sells them at a huge profit. Then OpenAI (for example) buys them at that inflated rate and then puts them into production.

So while Nvidia is "selling shovels", Google is making their own shovels and has their own mines.


On top of that, Google is also a cloud infrastructure provider, unlike OpenAI, which needs someone like Azure to rack those GPUs and host the servers.


I am pretty sure OpenAI has data centers of its own.


The own shovels for own mines strategy has a hidden downside: isolation. NVIDIA sells shovels to everyone - OpenAI, Meta, xAI, Microsoft - and gets feedback from the entire market. They see where the industry is heading faster than Google, which is stewing in its own juices. While Google optimizes TPUs for current Google tasks (Gemini, Search), NVIDIA optimizes GPUs for all possible future tasks. In an era of rapid change, the market's hive mind usually beats closed vertical integration.


Aka vertical integration.


Selling shovels may still turn out to be the right move: Nvidia got rich off the cryptocurrency bubble, and now they're getting even richer off the AI bubble.

Having your own mines only pays off if you actually do strike gold. So far AI undercuts Google's profitable search ads, and loses money for OpenAI.


> AI ... profits

Citation needed. But the vertical integration is likely valuable right now, especially with Nvidia being supply-constrained.


So when the bubble pops, the companies making the shovels (TSMC, NVIDIA) might still have the money they got for their products, and some of the ex-AI companies might at least be able to sell standards-compliant GPUs on the wider market.

And Google will end up with lots of useless super specialized custom hardware.


It seems unlikely that large matrix multipliers will become useless. If nothing else, Google uses AI extensively internally. It already did in ways that weren’t user-visible long before the current AI boom. Also, they can still put AI overviews on search pages regardless of what the stock market does. They’re not as bad as they used to be, and I expect they’ll improve.

Even if TPUs weren't all that useful, they still own the data centers and can upgrade equipment, or not. They paid for the hardware out of their large pile of cash, so there's no debt overhang.

Another issue is loss of revenue. Google Cloud revenue is currently 15% of their total, so still not that much. The stock market is counting on it continuing to increase, though.

If the stock market crashes, Google’s stock price will go down too, and that could be a very good time to buy, much like it was in 2008. There’s been a spectacular increase since then, the best investment I ever made. (Repeating that is unlikely, though.)


How could Google's custom hardware become useless? They've used it for their business for years now and will do so for years into the future. It's not like their hardware is LLM specific. Google cannot lose with their vast infrastructure.

Meanwhile, OpenAI et al., dumping GPUs while everyone else is doing the same, will get pennies on the dollar. It's exactly the opposite of what you describe.

I hope that comes to pass, because I'll be ready to scoop up cheap GPUs and servers.


Same way cloud hardware always risks becoming useless: the newer hardware is so much better that you can't afford not to upgrade - e.g., an algorithmic improvement that runs on CUDA devices but not on existing TPUs, which changes the economics of AI.


> And Google will end up with lots of useless super specialized custom hardware.

If it gets to the point where this hardware is useless (I doubt it), yes Google will have it sitting there. But it will have cost Google less to build that hardware than any of the companies who built on Nvidia.


Right, and the inevitable bubble pop will just slow things down for a few years. It's not like those TPUs will suddenly be useless; Google will still have them deployed. It's just that instead of upgrading to a newer TPU, they'll stay with the older ones longer. It seems like Google will face far fewer repercussions when the bubble pops than Nvidia, OpenAI, Anthropic, Oracle, etc., as they're largely staying out of the money circles between those companies.


And running loads profitably over the long term may require both lower power use and longer chip lifetimes - something that tends to come with lower power use.


aka Google will have less of a pile of money than Nvidia will


Alphabet is one of the most profitable companies in the world. For all the criticisms you can throw at Google, lacking a pile of money isn't one of them.


I think people are confusing the bubble popping with AI being over. When the dot-com bubble popped, it's not like internet infrastructure immediately became useless and worthless.


That's actually not all that true... a lot of fiber that had been laid went dark, or was never lit, and was hoarded by telecoms in an intentionally supply-constrained market in order to drive up the usage cost of what was lit.


If it was hoarded by anyone, then by definition it was not useless OR worthless. Also, you are currently on the internet if you're reading this, so the point kinda stands.


Are you saying that the internet business didn't grow a lot after the bubble popped?


And then they sold it to Google who lit it up.


Google uses TPUs for its internal AI work (training Gemini for example), which surely isn't decreasing in demand or usage as their portfolio and product footprint increases. So I have a feeling they'd be able to put those TPUs to good use?


Not sure if this matters for you or not, but my understanding (from some experiments) is that the "slicers" implicitly do a union. As in: you could have an STL with a bunch of overlapping blobs, and the 3D printer slicing code just checks isInside -- which is effectively a union.

At least that's what I found when I was generating STLs in code.
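
A toy version of the check (hypothetical code, not any real slicer's; real slicers operate on sliced polygons, but the membership logic is the same idea):

    # Hypothetical sketch of union-by-membership, not real slicer code.
    # The slicer effectively asks "is this sample point inside the part?",
    # so a point covered by two overlapping blobs is simply "inside" --
    # overlapping solids behave as if they had been explicitly unioned.

    def is_inside_sphere(point, center, radius):
        # True if the point lies within this one sphere.
        return sum((p - c) ** 2 for p, c in zip(point, center)) <= radius ** 2

    def is_inside_part(point, spheres):
        # Union semantics for free: inside the part if inside ANY solid.
        return any(is_inside_sphere(point, c, r) for c, r in spheres)

    blobs = [((0.0, 0.0, 0.0), 1.0), ((0.5, 0.0, 0.0), 1.0)]  # overlapping
    print(is_inside_part((0.25, 0.0, 0.0), blobs))  # True: in both, counted once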


Minor suggestion/request: it would be great if you added a final STL file of a working example to the GitHub repo. It might be easier for people to try if they can't get the Python code running on Linux.

(I haven't tried yet. But I'd love to just send an STL to my printer to see how well it prints.)


Not sure why the government would bother to bail them out. I would imagine Google could take up most of the slack if OpenAI implodes.


I can see bailing out banks; you need to keep money flowing. Same with bailing out something like the auto industry: lots of employees with non-transferable skills, and restructuring would be a mess.

With AI? Well, just sell the assets in bankruptcy. Models can be sold; hardware and infra can be sold; IP and brands can be sold too. Employees have mostly transferable skills and are mobile already.

Even if AI companies fall, other people can pick up the pieces if there is something valuable left. The market might be affected, but I don't think it would set off a chain of dominoes.


> With AI? Well just sell the assets in bankruptcy.

The entire stock market will tank, and a lot of very rich people will lose a lot of money. So the government won't let that happen.


Wouldn't a bailout-worthy event already lead to a stock market crash? Basically, a market run would be triggered if there was something that warranted a bailout. So would there be much left to bail out in order to still prop up the market?


I think the stock market prices in the value of being "too big to fail".

So bailing out might actually cause the stock to appreciate, because uncertainty about the possibility of a bailout suddenly resolves in a positive way.


You don't typically get to be a successful politician by being skeptical of future industry profits. I suspect the people who run our country tend to be credulous rubes in this regard. Sam Altman is many things, but he appears very good at spinning a narrative about the future of computation and the relative competitive risk of not aggressively pursuing it. Google simply has not taken this approach.


Does anyone believe this vision anymore, though? It's obvious that the current tech is not the path to AGI.


It's still a path to increasing work efficiency by replacing x% of workers with cheaper AIs. It's not great, but it's the best chance we have of advancement.


This is not obvious to laypeople who don't browse HN.


Because the government loves them grifters, and once they are bailed out they will gift things to the king personally.

In other words, high-level corruption that is not even hidden these days.


"oh the US cannot afford to lose the AI race to China, Mr President"


Hopefully Aaron Sorkin will write a screenplay about this sometime in the next decade.


I'm guessing it's no more than two years away.


Yes please. Or Jesse Armstrong and Adam McKay.


Wow, that's interesting.

I was a very early Evernote (paid) user. But they lost their way sometime after they became a unicorn, so I bailed out.

I had assumed, since they were bought, that it was just a way to squeeze money from existing users. I had no idea they were actually improving things.


I stopped using Evernote actively after they downgraded a formatting bug in their exported notes from Important to Wishlist and then sold to Bending Spoons.

Bending Spoons not only fixed that particular bug, but added a lot of useful features from other tools like "Block based editing" from Notion.

They are actively improving the product in every way, and they record short monthly recap videos to talk about the improvements - an interesting watch. They didn't milk and kill the product.

For me, the ship has sailed, unfortunately. I divided that Evernote corpus in two: the personal part went to Notion, and the technical part was carried over to Obsidian and converted into a digital garden.

I have no hard feelings for them, though. I wish them the best of luck.


I get the attraction of all these various online apps where you're supposed to be able to store everything in one place. But they're single points of failure. In spite of the downsides, I just use text notes and take pics of, e.g., conference slides, on my phone. But, honestly, I don't really refer back to the vast majority of that stuff anyway.


I like Evernote, but it just isn't worth $130/year for me. Last year they had a sale for $50 (or was it $60?) for a year, and I paid for that. If I can't renew at that price, I'll have to figure out how to migrate to Obsidian.


Migrating to Obsidian looks to be very easy now: https://help.obsidian.md/import/evernote

When I converted many years ago it required 3rd party tools and was slightly more involved (but still totally worth it).


Two things I suspect I'll miss from Evernote are their web clipper and their OCR.

Last time I tried the Obsidian web clipper, it was pretty rough. It would drop images or include ads. I found the Evernote clipper to be pretty much flawless.

Evernote's OCR capabilities are also great. Somehow it's able to do a better job of recognizing my handwriting than even I can do sometimes. Last I checked, Obsidian isn't very good at this which is strange because the two big platforms — Windows and MacOS — both have excellent OCR APIs they could use for free.


Wait a sec — you're saying you'll take the time and trouble to "... figure out how to migrate to Obsidian" rather than pay the $70-$80 renewal premium over what you paid last year? Let's do a thought experiment. Suppose you spend a total of 3 hours from start to finish doing the migration. That's the equivalent of being paid $25/hour in lieu of paying the Evernote full price renewal as opposed to what you paid on sale last year. I have a feeling you would not consider that close to being what your time is worth nor to what you're paid in your day job.


You might be surprised to know that I also mow my lawn, clean my home, cook sometimes, do laundry, drive myself to work, and sometimes even watch TV, spend time on HN, or play video games.


Aside from the fact that such calculations aren't necessarily applicable anyway, the math is off because they would most likely have continued to use, and pay for, Evernote for more than just the one year.


Maybe, but you don't have to move that far down the salary scale for $25/hour post-tax to start looking fairly attractive. In the US, median weekly personal income is in the $1200 range, so assuming 40 hour work weeks, $30/hour, or a bit under $26/hour after federal income tax if you live in a state with 0% income tax.
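
Spelled out (the 3-hour migration and the ~14% effective tax rate are illustrative assumptions, not data):

    # Illustrative arithmetic only; the inputs are rough assumptions.
    renewal_premium = 130 - 55   # full price minus the ~$50-60 sale price
    migration_hours = 3          # assumed one-time migration effort
    implied_wage = renewal_premium / migration_hours   # $25/hour "earned"

    median_weekly = 1200                     # rough US median weekly income
    gross_hourly = median_weekly / 40        # $30/hour
    net_hourly = gross_hourly * (1 - 0.14)   # ~$25.80/hour after federal tax
    print(implied_wage, gross_hourly, round(net_hourly, 2))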


I'm trying to imagine a product manager calling me to say, "Hi, we just bought this product you use, we're raising prices and firing the dev team. But hahahaha, you can't quit us, I have a spreadsheet here that says your time escaping our clutches will cost you more than paying the extortion fee to cover us buying the tool and the profit we need. Tough luck, but you have no logical alternative."

I'm not sure that my relationship with tools is so bloodless that it is only driven by dollars, cents, and minutes. I'm not sure I have to clench my teeth and write that product manager a cheque.


> Streaming was fun for a while, but as always these greedy execs are ruining it.

I've been doing a lot more digital purchasing, like movies and TV shows. I know there is some risk of the services shutting down, but Disney's Movies Anywhere mitigates that some.

I typically buy stuff when it is on sale. Generally a digital movie is (way) cheaper than a single ticket at a theater. And I've kinda built a decent-sized library where I can usually find something to watch.

And, generally, my library is way better than Netflix's at any given time. (Though I still have a couple(!) of streaming subscriptions...)


> digital purchasing

That's an oxymoron if you can't have a local copy

