aborsy's comments

Yeah, it would have been at best something like an irrelevant oracle.

The world’s infrastructure is running on his Linux now!


But their hardware is also terrible. Their consumer DiskStations had 1G NICs until recently, and still ship underpowered CPUs. Sales had to decline before they were convinced to upgrade to 2.5G in 2025. But then they removed the optional 10G slot in the 923+ model (they would still have made money from it, as it costs an extra $150), so when the industry moves to 10G, you can't upgrade the component and have to buy a whole new unit. The construction is plastic.

I have a 920+, and it's too slow; it frequently becomes unresponsive when multiple tasks are run.

They lag, and need to be constantly forced to improve.


Min/Max pricing theory?

Selling 10 units at $10 profit is far, far better than selling 100 units at $1.50 profit. Maybe even $2 per.

Why?

Because the more you sell, the more support, sales, and marketing staff you need. More warehouses, shipping logistics, office space, with everything from cleaners to workstations.

Min/Max theory is exceptionally old, but still valid.

So making a crappier product, with more profit per unit, yet having sales drop somewhat, can mean better profit overall.

There are endless ways to work out optimal pricing vs all of the above.

But... in the end, it was likely just pure, unbridled stupid running the show.


The economic notion is called marginal profitability. Better sales are a good thing as long as the marginal profit is positive, i.e., each extra unit sold still increases the overall profit. In your example, it's still profitable if the new model brings $1.50 profit per unit, and you stop only when the marginal profit per unit turns negative.
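
A toy illustration of that stopping rule, with invented numbers (a sketch of the arithmetic, not a real pricing model): suppose each additional unit erodes the per-unit margin a little, because support and logistics overhead grow with volume.

    # Toy marginal-profit model; every number here is made up for illustration.
    def total_profit(units, base_margin=10.0, overhead_per_unit=0.09):
        # The n-th unit earns base_margin minus overhead that grows with volume.
        return sum(base_margin - overhead_per_unit * n for n in range(units))

    for units in (10, 50, 100, 150):
        print(units, round(total_profit(units), 2))
    # -> 10 95.95, 50 389.75, 100 554.5, 150 494.25
    # Total profit rises while the marginal unit is still positive
    # (up to ~111 units here) and shrinks once it turns negative.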

In tech the model is often misleading, since the large investments needed to improve the product are not just a question of current profitability but an existential need. Your existing product line is rapidly becoming obsolete, and even if it's profitable today, it won't be for long. History is full of cautionary tales of companies that hamstrung innovation so as not to compete against their own cash cows, only to be slaughtered by the competition the next sales season. One more for the pile.


Shouldn't you be pricing the increased cost of support/sales/marketing into your profit calculations?

I'm guessing you mean for the crappier product, and sure, that's a consideration.

I haven't looked at them in years, but there are formulas for all of that, e.g., to help you work out if it makes sense.


> So making a crappier product, with more profit per unit, yet having sales drop somewhat, can mean better profit overall.

This will never work in a competitive market like the NAS market. The only thing that will get you higher profit margins is a good reputation. If you're coasting on your reputation, sales and customer experience matter. Fewer sales one quarter means fewer people to recommend your product in the next one, which is a downward spiral. A worse customer experience is obviously also a huge problem, as it makes people less likely to recommend your product even if they bought it.

They went for a triple whammy here, from which they likely won't recover for years. They now have fewer customers, fewer people who are likely to recommend their product, and their reputation/trustworthiness is also stained long-term.

Crappier products at higher margins only work if you're a no-name brand anyway, have no competition, or have a fanatical customer base.


This seems to be the model that Broadcom is going with for VMware.

I totally agree with you.

The appeal for me was the "it just works" factor. It's a compact unit and setup was easy. Every self-built solution would be both rather large (a factor for me) and more difficult to set up. And I think that's what has kept Synology alive for so long. It allows entry-level users to get into the self-hosting game with the bare minimum you need, especially when transcoding (Plex/Jellyfin) comes up.

As an anecdote, I had exactly this problem when buying my last NAS some time ago. It was the DS920+ or DS923+ vs. the QNAP TS-464. The arguments for QNAP were exactly what you write: newer chip, 2.5G NICs, PCIe slot, no NVMe vendor lock-in. So I bought the QNAP unit. And returned it 5 days later, because the UI was such hot garbage that I did not want to continue using it.

Lately, the UGreen NAS series looks very promising. I'm hearing only good things about their own system, AND (except for the smallest 2-bay solution) you can install TrueNAS. It almost sounds too good to be true: compact, (rather) powerful, and flexible, with support for their own OS.

As the next player, though with mixed feelings about support, the Minisforum N5 units also look promising / near perfect: 3x M.2 for boot+OS, 5 HDD slots, and a low-profile PCIe expansion slot.


I sold my Synology for an AOOSTAR WTR Max. It arrived with an issue (the USB4 port didn't work), but replacement was quick and easy. So far, I'm rather happy. I really hesitated over the Minisforum.

Surprising to read your take.

Transcoding was the reason I moved away from Synology. The rest was fine, not great, but... okay.

But there was no way to improve transcoding performance. If a stream lagged, it would always lag. Hence I jumped ship and just built my own.


I now have a mini PC next to my NAS, leaving the NAS to file-storage chores only. That said, I also run Nvidia Shield TV Pro boxes with Kodi for local media, so I largely don't have to worry about the encoding.

I gave up on transcoding and just recoded everything into the format the Apple TV with Infuse wants.

But my "NAS" is an ex-lease enterprise server.


I often read that take. Yes, the J4125 was fine for a few easy / low-effort transcodes, like 1080p to 720p for mobile streaming.

But I'm with you. The rest is fine, not great, but working well enough.


I bought an inexpensive used Mac Mini and attached a standard HDD USB3 enclosure to it with multiple drives. Works great for streaming to any network appliance I want to use.

I wish transcoding was available on my 1819+. (It isn't.)

There are some DIY chassis that are pretty small, like the Jonsbo N2; great, since you can upgrade the CPU later on. https://blog.briancmoses.com/2024/11/diy-nas-2025-edition.ht...

UGreen, AOOSTAR, and TerraMaster are also good alternatives.


I've had a couple of Synology drives for many years (DS1520+, DS918+). They've always worked fine (still chugging away).

I have had terrible luck with Drobo.


Yep, I had two different models that had been running for about seven years each and had an excellent experience overall until Synology tried to change their drive policy.

I get all the points about EOL software and ancient hardware, but the fact of the matter is I treat it like an appliance and it works that way. I agree that having better transcoding would be nice. But my needs are not too sophisticated. I mostly just need the storage. In a world with 100+ gig LLM models, my Synology has suddenly become pretty critical.


They have way underpowered CPUs compared to what you can get for the same money elsewhere. They're just a bad deal.

Hi there, I was looking to get a NAS that I could just install and not have to worry about maintenance too much, and Synology was at the top of the list. If not Synology, what would you suggest?

In my case, Synology has worked fine. Reliability is a big deal for RAID without a separate backup (RAID is not the same as "backup," but it does the trick 90% of the time).

It's entirely possible that their newer units are crappier than the old workhorses I have.

I don't use any of the fancier features that might require a beefier CPU. One of the units runs Surveillance Station, and your choices for generic surveillance DVRs are fairly limited. Synology isn't perfect, but it works quite well, and it isn't expensive. I have half a dozen types of cameras (I used to write ONVIF stuff). Surveillance Station runs them all.


Synology's fine - even ideal - for that use case. If you want to run Docker containers, run apps for video conversion like Plex, etc, then you'd likely want to consider something with a beefier CPU. For an appliance NAS, Synology's really pretty great.

I was just mentioning personal experience. It wasn't even an opinion.

I would love to know what a "good deal" is. Seriously. It's about time for me to consider replacing them. Suggestions for a generic surveillance DVR would also be appreciated.

Thanks!


What kind of tasks?

I am not necessarily disagreeing with you, but context is important. I've had a 918+ and a 923+, and the CPU has idled through all my years of NAS-oriented usage.

Originally I planned to also run light containers and servers on it, and for that I can see how one could run out of juice quickly. For that reason I changed my plan and offloaded compute to something better suited. But for NAS usage itself they seem plenty capable and stable (caveat: some people need source transcoding of video, and then some unfortunately tricky research is required, as a more expensive / newer unit isn't automatically better if it lacks the hardware capability).


A significant part of the prosumer NAS market isn't running these exclusively for storage. They usually want a media server like Plex, Emby, or Jellyfin at minimum, and maybe a handful of other apps. It would be better to articulate this market demand as demand for low-power application servers, not strictly storage appliances.

I used to like synology for that, but now I just want a NAS with NAS things on it that supports the latest technology.

As soon as my Synology dies I'm replacing it with Unifi. I don't want all that extra software with constant CVEs to patch.


Simplification is the key. My setup went from: Custom NAS hardware running vendor-provided OS and heavyweight media serving software -> Custom NAS hardware running TrueNAS + heavyweight media server -> Custom NAS hardware running Linux + NFS -> Old Junker Dell running Linux + NFS. You keep finding bells and whistles you just don't need and all they do is add complexity to your life.

Not OP. I went back and forth about having containers etc. on my NAS. I can of course have a separate server do it (and did that), but:

a) it increases energy cost;

b) accessing storage over SMB/NFS is not as fast and can lead to performance issues;

c) in terms of workflow, I find having all containers (I use rootless containers with Podman as much as possible) running on the NAS that actually stores and manages the data to be simpler. That means running Plex/Jellyfin, Kometa, Paperless-ngx, the *arrs, and Immich on the NAS, and for that Synology's CPUs are not great.

In general, the most common requirements of prosumers with a NAS are 2.5GbE and transcoding. Right now, none of Synology's offerings provide both.

But really, the main reason I dislike Synology is that SHR1 is vendor-locked behind their proprietary Btrfs modifications, and so can only be accessed from a very old Ubuntu...


You'll get much stronger CPUs from other brands at the same price.

Are there any other NASes out there that a) support ZFS/BTRFS, b) support different-sized drives in a single pool, and c) allow arbitrary in-place drive upgrades?

Last I checked, I didn't find anything that satisfied all three. So DSM sits in a sweet spot, I think. Plus, plastic or not, Synology hardware just looks great.


I don't know why you got downvoted; you're right. Many models currently on sale as new have CPUs that are 10 years old.

There must be more than that, another explanation, if they are slow. Ten-year-old CPUs were plenty fast already, far more than enough even, to power a NAS device.

My Windows 11 machine often takes many seconds to start some application (Sigil, Excel, whatever), and it sure isn't the fault of the CPU, even if it's "only" a laptop model (albeit a newish one, released December 2023: Intel Core Ultra 7 155H, 3800 (max 4800) MHz, 16 cores, 22 logical processors).

Whenever software has felt slow over the last decade or more, look at the software first, not the CPU, as the culprit, unless you are really sure it's the workload and the calculations.


You are correct that the software should perform better, but I don't think the average buyer understands this: they buy a new (and sometimes quite expensive) device, yet it feels sluggish to them, so they feel like they bought a bad product.

But even in the more business/enterprise segment you're getting screwed over. Let's go to the product selector here: https://www.synology.com/en-uk/products?product_line=rs_plus... and look at the XS+/XS Series, subtitled "High performance storage solutions for businesses, engineered for reliability." Let's pick the second choice, the RS3621xs+. According to the Tweakers pricewatch (https://tweakers.net/pricewatch/1656552/synology-rackstation...), this thing went on sale on the 8th of February 2021 (4 years ago). The spec sheet says it has an Intel Xeon D-1541; let's look at what ARK (https://www.intel.com/content/www/us/en/products/sku/91199/i...) has to say about this CPU:

Marketing Status: Discontinued

Launch Date: Q4'15

Servicing Status: End of Servicing Updates

End of Servicing Updates Date: Saturday, December 31, 2022

I'll let you make your own conclusions if that's an OK purchase these days.


Who's out here getting service updates for their CPU?

Another factor related to speed is that they didn't allow using the NVMe slots for a storage pool until recently, and only on new models (on the 920+ you still can't do that; and even if they allowed it, the limited PCIe lanes of that CPU would cap the throughput). So a container's database has to be stored on mechanical HDDs. Again, other companies moved on, and I remember there was a lot of community dissatisfaction, and hacks, until they improved the situation.

Their hardware is limited already, and they also artificially limit it further by software.

They have changed course now and allow using any HDD. Will DSM display all relevant SMART attributes? We will see!


> Ten-year-old CPUs were plenty fast already,

That depends on the CPU… Some are optimised for power consumption, not performance, and on top of that will end up thermally throttled, as they are often in small boxes with only passive cooling.

A cheap or intentionally low-power Arm SoC from back then is not going to perform nearly as well as a good or more performance-oriented Arm SoC (or equivalent x86/x64 chip) from back then. They might not cope well with 2.5Gb networking unless the NICs support offloading, and if they are cheaping out on CPUs they might not have high-spec network controller chips either. And that is before considering that some people talk to the NAS via a VPN endpoint running on the NAS, so there is the CPU load of that on top.

For sort-of-relevant anecdata: my home router ran on a Pi 400 for a couple of years (the old device developed issues, and the Pi 400 was sitting around waiting for a task, so it got a USB NIC and got that task), but it was replaced when I upgraded to a full-fibre connection, because its CPU was a bottleneck at those speeds just for basic routing tasks (IIRC the limit was somewhere around 250Mbit/s). Some of the bottleneck I experienced would be the CPU load of servicing the USB NIC, not just the routing, of course.

> far more than enough even, to power a NAS device.

People are using these for much more than just network-attached storage, and they are sold as being capable of the extra, so it isn't as if people are being entirely unreasonable in their expectations: PiHole, VPN servers, full media servers (doing much more work than just serving the stored data), etc.

> There must be more than that, another explanation

Most likely this too. Small memory. Slow memory. An old SoC (or individual controllers) with a slow interconnect between processing cores and IO controllers. There could be a collection of bottlenecks to run into as soon as you try to do more than just serve plain files at ~1Gbit speeds.


On a DS920+ users will run various containers, Plex/Jellyfin, PiHole, etc. The Celeron J4125 CPU (still used in 2025 on the 2-bay DS225+) is slow for the stuff most users would like to run on a NAS today, and the software runs from the HDDs only. Every other equivalent modern NAS is on an N100 and can use the M.2 drives for storage just like the HDDs, which makes them significantly more capable.

The Synology DS925+, for example, does not have GPU encoding. For an expensive, prosumer-positioned NAS this is crazy. They can't let us have both 2.5GbE NICs and a GPU.

That's fine; some of those also have kernels that have been EOL for almost 10 years.

I have a 10G NIC in my DS923+.

I agree with the rest, though.


This kept me from buying one too. One of the models I considered would make me choose between an M.2 cache OR a 10GbE NIC. I didn't know they are plastic now, either. It's a shame; I really want to like them. I've also heard there's some "bootleg" OS you could install over DSM, but I'm not sure what it's called. Synology was trying to silence it, IIRC.

10G. You're cute.

My institution still has 100M everywhere. I'd love 1G.


What are good static site generators (SSGs) to create a simple minimalist academic-like website?

It seems Astro or Hugo (potentially with Tailwind to customize the CSS) are good options. Astro felt more complex than needed; on the other hand, writing HTML/CSS manually doesn't work well either (predefined layouts and easy-to-use style customization options would be needed). I couldn't find a suitable Astro theme either (most themes seem to be made for companies, or are needlessly fancy, and a few are for individual blogs).

In the process of figuring this out, I was surprised how complicated designing a website has become. There are so many frameworks, components, integrations, and significant dependencies that I wondered how web developers keep up in this ecosystem.


Quarto (https://quarto.org/) is well regarded in academic publishing and supports website projects: https://quarto.org/docs/websites/.

Thanks for mentioning it; it has relevant templates, including an interesting one for a class.

Ask an LLM to write a 50 LOC program that converts a directory of Markdown files to a directory of HTML files using pandoc.

This actually might be the "correct" solution these days. Pre-LLMs, I wrote myself a Python script that converts Markdown to HTML with custom headers and footers and the rest of the junk, but today I would for sure give an LLM a go first and see what it produces.
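
For the curious, a minimal sketch of that kind of script, assuming pandoc is installed and on PATH; the directory names and the header.html/footer.html include files are placeholders:

    #!/usr/bin/env python3
    # Convert a directory of Markdown files to standalone HTML pages via pandoc.
    import pathlib
    import subprocess

    src, out = pathlib.Path("md"), pathlib.Path("site")
    out.mkdir(exist_ok=True)

    for md in src.glob("*.md"):
        html = out / (md.stem + ".html")
        subprocess.run(
            ["pandoc", str(md), "--standalone",
             "--include-before-body=header.html",  # shared header (placeholder)
             "--include-after-body=footer.html",   # shared footer (placeholder)
             "--css=style.css",
             "-o", str(html)],
            check=True,
        )
        print(md, "->", html)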

Having done a fair amount of LLM webpage generation, I'd say it does an okay job, but it repeats a lot of "best practices" that are a few years out of date, specifically around meta tags.

If I were doing this, I would have it do Deep Research or similar on the most minimal, efficient set of tags to include in the `head` tag in late 2025.

It's possible you can get away with a bunch of .md files, but you will need a file to store site-wide metadata and/or YAML "front matter" in the Markdown to define page-specific variables.
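
For illustration, a minimal sketch of reading that front matter, assuming PyYAML and files that open with a `---` fenced block:

    import yaml  # PyYAML

    def split_front_matter(text):
        # "---\ntitle: Foo\n---\nBody..." -> ({"title": "Foo"}, "Body...")
        if text.startswith("---\n"):
            _, meta, body = text.split("---\n", 2)
            return yaml.safe_load(meta) or {}, body
        return {}, text

Site-wide metadata can then live in a single YAML file read the same way.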


> I was surprised how complicated designing a website has become.

Sounds like you would enjoy https://mastrojs.github.io – a _minimal_ Astro alternative.


It is the natural entropy of the web ecosystem.

We could not hope that everything would remain simple. Thanks for all the open standards and frameworks that we don't have to license.

And yes, it is a paralysis of choice.


I use GitHub Pages with Jekyll. It took a bit of time to make a custom theme and set up the custom domain, but it's really simple now to just add a Markdown file and push to GitHub.

https://github.com/loldot/loldot.github.io


If you really believe things were easier in the past, then just do what you did back then. It'll all still work.

Maybe look into Eleventy [0] or Zola [1]. Both are relatively recent developments and have a skilled and forward-thinking userbase.

[0] https://11ty.dev/

[1] https://getzola.org/


You can find CSS-only themes or Tailwind themes to use with Astro or Hugo.

I personally liked Astro's approach to "Components": less glue, more "just writing HTML/MD". That is, of course, a learning curve of its own.


Not that you need more choices, but Franklin.jl hit my sweet spot for "handles math and code inline well, otherwise is clean and gets out of my way".

No need for an SSG. Just copy something like this: https://thebestmotherfucking.website and make every page by hand like you are some 80-year-old greybeard.

For what most people are doing with static site generators (blogging), going "raw" HTML is honestly not a bad choice. Most usage of Markdown is just to avoid writing tags. It's not a whole lot more work to write `<em>foo</em>` instead of `_foo_`.

HTML was meant to be hand-authored, not to be a compilation target. It's as if people forgot you can still cook rice without a rice cooker.


Yeah, I got fed up with all the dependencies and processes and workflows and configurations and just made my site manually like it's 2004: hand-written HTML+CSS+JS. As long as it's a small personal site, this is very robust, and there is no constant churn and deprecation and change for the sake of change.

I have a couple of 100% handwritten sites as well. The challenge is when they go beyond about a dozen pages. If you want to change something across pages that isn't a find & replace, like adding a meta tag, it starts to be a drag. I'm going to be converting one of those sites to Swift Publish soon, as it's a low-dependency framework that hasn't been updated in years.
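
That kind of cross-page change usually ends up as a throwaway script, though. A sketch, where the tag and the paths are made up:

    # One-off: add a meta tag to every hand-written page under site/.
    import pathlib

    TAG = '<meta name="color-scheme" content="light dark">'

    for page in pathlib.Path("site").rglob("*.html"):
        html = page.read_text()
        if TAG not in html and "</head>" in html:
            page.write_text(html.replace("</head>", "  " + TAG + "\n</head>", 1))

It works, but a dozen such scripts later you've reinvented a static site generator.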

Sure, but many people just want a super basic academic site with like 1 or 2 pages ("simple minimalist academic-like website", as it was phrased upthread), and then they start to fiddle with a million frameworks, and the compilation chain breaks a few months later when they want to add a new publication to the site, and they have to dig through versions and dependencies and environments and CI/CD and whatnot. When they could have just typed up the site by hand in one afternoon, and it would keep working indefinitely.

I experimented with customizing a flat page by writing HTML and CSS manually, with some help from LLMs. Using this approach, it’s possible to create something resembling the website you linked.

The result is decent, though the organization of the long page could be improved. In my view, a good personal website would include space for a photo, name, and affiliation, along with a simple menu for sections like biography, publications, and projects, with a professional style (font and colors), but nothing more complex. The publications page could have a style for presenting papers in a clean, well-formatted list. That may require a theme that comes with a layout and a degree of customization and novelty.

A lot of themes I looked into are not quite professional.


If you've been AI-pilled, just have Claude generate it for you from the .md files you wrote by hand.

This line of conversation is distracting people from the embarrassing state of the WWW.

If anything good is left outside the temples of Facebook, etc, it's not much, and I'm embarrassed.


There are many threads here on HN about that as well. But this thread here is about CSS styling.

Canada Post is owned by the government of Canada, and is heavily unionized.

France’s problems are spreading to Canada?

Is this inevitable for any similarly structured entity?


> France’s problems are spreading to Canada?

Yes, for the past few hundred years :) [0]. But more seriously, strikes are much more common in Canada than in the US (and it's been this way for as long as I can remember).

> Is this inevitable for any similarly structured entity?

Not really. This is a pretty common structure in Canada [1], and only a few of the organizations in that list are known for striking. It's very province-dependent though—Quebec is stereotypically associated with frequent striking, while it's fairly rare in Alberta.

[0]: https://en.wikipedia.org/wiki/New_France

[1]: https://en.wikipedia.org/wiki/Crown_corporation#List_of_fede...


The website doesn’t work on my phone.

I use Magic Wormhole when I first install a computer, to send an SSH key. At that point, I have SSH. Also, if you use a mesh VPN, your devices are already connected and there are several ways to transfer files.

I'd like to use it more. What is your use case for this?

BTW, I haven't found a good iOS app.


FYI, there's a subcommand just for this use case: `wormhole ssh invite` / `accept`, which will read the SSH pubkey on one end and append it to authorized_keys on the other.
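
From memory, the flow looks roughly like this (double-check which side runs which subcommand):

    # On the machine you want to log in to:
    wormhole ssh invite          # prints a one-time code

    # On the machine that holds your keypair:
    wormhole ssh accept <code>   # sends the local public key across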

I think you should create a new SSH key for every computer, so you only have to share the public key.

If you’re launching machines from a prebaked image or template, it may not be possible to inject/generate something unique for each one, or doing so is finicky to the point that using something like wormhole may be simpler than fighting with your provisioning tool.

SMB is great on a LAN, but its performance over the internet is poor. That leaves SFTP and WebDAV in that case. SFTP would be my choice, if there is client support.

I suspect that NFS over the internet is also not the most brilliant idea; I assumed the LAN setting.

Is there an easy setup so that this could be done with a VPS relaying the traffic but not decrypting it?

So far, I'm thinking of something like frp to proxy incoming UDP from the internet to the private network. I'm not sure whether that works if only outgoing 443 is open on the private network.
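
Concretely, something like this is the frp shape I have in mind (a sketch; names, addresses, and ports are placeholders, and I haven't verified the outbound-443-only case, though frpc only dials out to the server, so running frps on 443 may cover it). The VPS just forwards UDP; as long as the inner protocol (e.g. WireGuard) does its own encryption, the relay never sees plaintext:

    # frps.toml on the VPS
    bindPort = 7000

    # frpc.toml on the private network
    serverAddr = "vps.example.com"
    serverPort = 7000

    [[proxies]]
    name = "wg-udp"
    type = "udp"
    localIP = "192.168.1.10"   # placeholder LAN host
    localPort = 51820
    remotePort = 51820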


Never had issues with Amazon; I shop there frequently.

Local shops are scams in my area: expensive, don't take back items, limited selections, don't have the right items so you have to settle for less, time-consuming, bad customer service, can impose language requirements on some people at the time of disputes (but not when selling), reviews of items are not available and a lot of them turn out to be bad, long waiting lines… Your complaints about online shopping apply to local shopping as well.

The benefits of online shopping for consumers are plentiful and evident. The majority of products I bought online after careful research have been solid.


> Local shops are scams in my area: expensive, don't take back items, limited selections, don't have the right items so you have to settle for less, time-consuming…

None of these are scams; you're misusing the word. You mean a worse shopping experience, not defrauding the customer.


I live in a country with the same problems, and everything here is so much more expensive that it absolutely deserves to be called a scam. Stuff such as books and electronics can easily be 2x as expensive locally as abroad.

A local business that can't compete on price with a global marketplace is not a scam.

What they offer is the ability to put the product in your hands immediately; they hire your friends and family, and they pay taxes to your city and national government, allowing services like police, fire, and hospitals to be funded.

For many, cheap prices with longer wait times and no support for the local area make more sense. But then they complain about the local services, not realizing that their decision not to support them had an effect.


Still not a scam. Stop damaging the language.

If you host your email on a VPS, you might as well let an email provider manage it all for you. In both cases, a company has access to your emails. What you have achieved is doing the dirty job of administration for the company, while not getting the privacy benefits of self-hosting.

If you host it at home, can you ensure uptime?


This argument is absurd.

You have a trust relationship with your VPS provider. Yes, they can access it if they want to. The difference is that with a VPS you have contractual privacy, whereas with e.g. Gmail they outright tell you that they scan your emails. So it is a big difference.

As with everything, it's a question of budget and what risks you accept.

