
Recycle is definitely grammatically correct. It's just a different word for re-use.

I thought recycling was breaking the device down into its constituents (mostly to recover precious metals or other base materials). In contrast, reuse is where the device is kept mostly intact and used for other purposes, such as this. Breaking things down requires considerable amounts of energy, through the sheer logistics of it alone, relative to just reusing the device.

There is quite a bit of room between reusing something for its intended purpose and recycling something at the material level. TFA is at neither extreme - the phone is kept intact but it's employed for a much more limited purpose that only really uses some of the phone's components.

Ironically, this is semantics, not grammar.

How is that ironic? Since the thread is about definitions, I'm surprised we're in a situation where grammar is expected to be correct but definitions aren't.

SMS 2FA stops enough would-be criminals and checks the compliance box. They don't lose enough money to sophisticated thefts to do something better.

My REST client emits this to console.log out of the box, and it's been really useful: https://github.com/badgateway/ketting
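
A minimal sketch of the basic usage (the endpoint URL is made up):

    import { Client } from 'ketting';

    // Hypothetical API root; Ketting follows hypermedia links from here.
    const client = new Client('https://api.example.org/');

    const article = client.go('/article/1');
    const state = await article.get();

    // If the server advertises e.g. Deprecation or Sunset headers,
    // a warning is logged to the console without any extra setup.
    console.log(state.data);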

It's nice when tooling builds this sort of stuff in, because it also encourages APIs to implement it.


Web developers should be forced to use hardware that's roughly at the 10th percentile in performance of their user base, not the 90th. Alternatively, make performance a WCAG concern.

I don't think this would help. If a site or SPA performs terribly even on a high-end machine, the only conclusion I can draw is that performance isn't tested or validated at all.

Chrome DevTools (and hopefully others) has a performance monitor option that lets you throttle the CPU and the network. It should be plenty possible to test the performance of sites on simulated 10th-percentile systems; this just seems to be low priority.
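
You can script the same thing in CI; a sketch with Puppeteer and raw CDP commands (the throttle values are guesses, tune them to your actual 10th-percentile hardware):

    import puppeteer from 'puppeteer';

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const cdp = await page.createCDPSession();

    // 6x CPU slowdown relative to the machine running the test.
    await cdp.send('Emulation.setCPUThrottlingRate', { rate: 6 });

    // Roughly "slow 3G": 400 ms round trips, ~50 KB/s each way.
    await cdp.send('Network.emulateNetworkConditions', {
      offline: false,
      latency: 400,
      downloadThroughput: 50 * 1024,
      uploadThroughput: 50 * 1024,
    });

    const start = Date.now();
    await page.goto('https://example.com', { waitUntil: 'load' });
    console.log(`Loaded in ${Date.now() - start} ms under throttling`);
    await browser.close();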

Absolutely, and this trend didn't start with the current AI boom. It started getting tough for people around 2017 (with some exceptions in between). Before that, you could likely get a job right out of a boot camp. Supply has now far outstripped demand at the junior level.

Nothing new. Circa the Great Recession, most of my fellow grads at a top-5 engineering school could not get jobs. Among my friends, most of whom had internships, co-ops, and top ~10% class rank, it mostly took us 6 months or so to secure a job, as all our co-ops/internships cut off hiring before graduation. Many people did stuff like drive forklifts or work at Walmart for a while. Another friend of mine, a dual engineering major summa cum laude, worked sorting screws in a screw factory, a position normally occupied by people in that town with serious mental handicaps.

The data in the paper shows a clear negative turning point for junior workers in late 2022.

On Windows, doesn't this technically mean OP is running Linux inside a Linux VM inside Windows? From what I understand, Docker is Linux tech, and to use it anywhere else a (small) Linux VM is required. If true, I would just dispense with the extra layer and run a Linux VM directly. Not to discourage experimentation, though!

Almost.

For one thing, Docker is not really "Linux inside Linux". It uses Linux kernel features to isolate the processes inside a container from those outside. But there is only one Linux kernel which is shared by both the container and its host (within the Linux VM, in this case).
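
You can see the shared kernel for yourself; a quick sketch (assuming a Linux host with Docker installed, run with Node):

    import { execSync } from 'node:child_process';

    // Both commands report the same kernel release, because the
    // container shares the kernel of the host it runs on.
    const hostKernel = execSync('uname -r').toString().trim();
    const containerKernel = execSync('docker run --rm alpine uname -r')
      .toString()
      .trim();

    console.log({ hostKernel, containerKernel }); // identical values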

For another, running Linux containers in a Linux VM on Windows is one (common) way that Docker can work. But it also supports running Windows containers on Windows, and in that case, the Windows kernel is shared just like in the Linux case. So Docker is not exactly "Linux tech".


I think GP is likely referring to Docker Desktop, which is probably the most common way to use Docker on Windows.

Running Linux containers with Docker Desktop involves a small Linux VM in which the containers run, and Docker then does some mucking about to integrate that better with the Windows host OS.


I thought Docker only supports Windows as a host if you enable WSL, in which case you're running on Hyper-V and a Linux kernel as part of WSL2, so absolutely Linux tech on a Linux VM on Windows... Am I wrong?

You are. You can run Docker for Windows, and run Windows binaries in reasonably isolated containers, without involving Linux at all [1]. Much like you run Linux containers on Linux without involving Windows.

It's Docker Desktop that assumes WSL; Docker Engine does not. Also, you seem to need Windows Server; IDK if it can be made to work on a Pro version.

[1]: https://learn.microsoft.com/en-us/virtualization/windowscont...


Docker Desktop defaults to WSL2, but it makes no such assumption. You can run it with Hyper-V.

Docker supports either Hyper-V or WSL2 as a host for the Linux kernel; they generally push people towards WSL2. I vaguely recall WSL2 uses a subset of Hyper-V, the name of which escapes me atm.

You are. Docker Desktop supports two different container platforms: usual Linux ones and Windows Containers.

With the former, a Linux kernel is required. You have two options: using WSL2 and benefiting from all the optimizations and integrations that Microsoft made, or running a full Hyper-V VM, which gives absolute control and isolation from the rest of the system.

For the latter, you need a Pro license and need to enable the Containers feature (deployment requires the more expensive Server licenses). Then you can run slimmed-down Windows images like "Nano Server", which doesn't have GUI APIs.


Can he install Wine in the Docker container to run Windows games from it?

Steam and its Remote Play options seem more enticing to set up, for me.

Isn’t this the case on macOS too?

I desperately wish I could run Docker properly (CLI) on the Mac rather than use Docker Desktop, and while we're making a dream list, can I just run Ubuntu on the Mac mini?


I've been using Colima for CLI Docker on my ARM Mac. It's pretty straightforward using Homebrew.

Colima is great. However, in the upcoming macOS 26 Tahoe, and mostly in macOS 15 Sequoia, Apple is beginning to provide a first-party solution:

https://github.com/apple/container

I've been experimenting with it in macOS 15, and I was able to replace Colima entirely for my purposes. Running container images right off of Docker Hub, without Docker / Podman / etc.

(And yes, it is using a small Linux VM run under Apple's Virtualization framework.)


I ran into various issues, I think, but my main objective was running a full k3s cluster this way. Reckon this is achievable with full networking support now? Also, if I already have Colima set up, does the new Apple container tool provide any benefits beyond just being made by Apple?

Try OrbStack. It is fast, and it has a Kubernetes cluster feature.

This thread is amazing - thank you all.

I'm surprised I didn't stumble onto any of these options; I searched and didn't find them.


It might not be Ubuntu but Asahi Linux runs Fedora pretty well on M2 Pro and older Apple Silicon Mac Minis: https://asahilinux.org/fedora/#device-support


No, WSL2 does not run "inside Windows", but on the "Virtual Machine Platform", a sort of mini Hyper-V.

Sup dawg, I heard you like OSes.

It's still an interesting post, because if it's true, I'd still be curious how you'd get 20 million people to load anything.

But the title here is totally misleading, because it sure sounds like someone took control of 9% of the IPv4 address space; the actual post opens with the proper context.


I would guess a WordPress plugin or something.

20 million is a lot, but if you look at the geoip data, they are spread around the whole world; I took 3 random recent IPs and saw Vietnam, Brazil, and Angola. So it's not that much when it's worldwide.

But it suggests it's not a geographically limited website, if it's through a website at all. It's probably not an ad buy. (Who would burn money on that...)

However, the requests arrive literally every second, so it's something very popular. (Or it's a bot, and they're somehow faking the source addresses...)
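
(The geoip spot check is easy to script; a sketch using the geoip-lite npm package, with placeholder addresses:)

    import geoip from 'geoip-lite';

    // Placeholder IPs from the documentation ranges; substitute real
    // ones from the leaderboard.
    for (const ip of ['203.0.113.5', '198.51.100.7', '192.0.2.9']) {
      console.log(ip, geoip.lookup(ip)?.country ?? 'unknown');
    }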


> Vietnam, Brazil and Angola

Curiously, these are some of the top countries I see when analyzing traffic from malicious scraping bots that disguise themselves as old Chrome versions on my websites.

So it's possible that one of those botnet-ish residential proxy services is being used here. The ones that use things like compromised browser extensions to turn unknowing users into exit nodes.

Edit: Yep, it's residential proxies, someone on the linked page mentioned a website where you can look up the IPs and all of them come up as proxies.


You can get 100 million people to load the 1x1 by adding it, using JavaScript, to an AdSense ad you publish on Google...
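
Something like this inside an ad creative would do it; a sketch, where the claim URL's query parameter is a guess:

    // Hypothetical 1x1 pixel: every viewer's browser hits the claim
    // endpoint. The "name" parameter is assumed, not verified.
    const pixel = new Image(1, 1);
    pixel.src = 'https://ipv4.games/claim?name=example';
    pixel.style.display = 'none';
    document.body.appendChild(pixel);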

My browser has been hijacked through their ad network numerous times.

Odds are, the culprit owns some IP that runs on 20M devices, whether it's a mobile game, a botnet, an ad, or some other script/service that allows other machines to make the request on their behalf.


I find this really interesting. I can see a few different ideas on GitHub to claim IPs, but I don't see any of them reaching that scale.

https://github.com/search?q=ipv4.games%2Fclaim&type=code&p=1

While running ads is definitely a possibility, reaching 9% of all available IPs sounds like a crazy expensive campaign. I don't know what the ratio of people to public IPs is, but I doubt it's one.


20 million unique users is not that much. I don't understand the claim that this constitutes 9% of all IP addresses; it doesn't. There are about 4 billion public IPv4 addresses, and 9% of that would be about 360 million.


You're right. As others said in the comments, the 9% is of the total active hosts tracked by Censys (~231 million). But I still think it's challenging to have that much reach, and unlikely to be an ad campaign. Using numbers from the website below, the cost of getting 20 million impressions would be around $43,200 on the low end for YouTube ads, and can be much higher on other platforms. That also assumes perfect efficiency, where you get exactly one impression per IP, which is unlikely to be the case.

https://www.guptamedia.com/social-media-ads-cost
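
For reference, the back-of-the-envelope math (the CPM is taken from the low end of the linked page):

    // 20M impressions at a ~$2.16 CPM (cost per 1,000 impressions),
    // optimistically assuming exactly one impression per IP.
    const impressions = 20_000_000;
    const cpmUsd = 2.16;
    console.log((impressions / 1000) * cpmUsd); // 43200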


Is it reasonable to assume these aren’t 100% static IP addresses? If so, maybe there’s some double counting going on.


The commenters on the linked post mention loading the pixel image embedded in an advertisement campaign.

This would make it possible to have thousands of impressions for relatively low amounts of money.


If you run some random mid-sized web page with ~2 million monthly "unique" (by IP) visitors, you'll get there very quickly: ten such months adds up to 20 million.

Maybe IoT software, though I wonder how they are doing the NAT busting if it's behind a router.


One correction that I can't leave on the actual article (subscriber only!) is that I'm certain multi-screen support worked on Windows 98. Excellent article as usual though!


This article is probably the first time I 'get' why OS/2 was seen as the future and Windows 3 as a stop-gap, even without the GUI. The OS/2 GUI never really blew me away, and whenever the early non-GUI versions of OS/2 were mentioned, it always seemed a bit dismissive.

But seeing it laid out as just the multi-tasking kernel that it is, it now seems much more obviously a major foundational upgrade over MS-DOS.

Great read!


Raspberry Pi OS is a derivative and not straight-up Debian. It's not released yet; a beta exists, and it looks like this one will support an in-place upgrade.

