
FreeBSD has 3 different firewalls, not 3 different interfaces to the same firewall. Each firewall has its own purpose. IPF is lightweight, pf has a nice UI/UX, ipfw is very integrated into the system.

More importantly, running a simple kldstat will tell you which firewall is loaded. On Ubuntu (as an example) I have no idea whether I should be using nftables or iptables, or whether ufw is even enabled.
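A quick sketch of that check (assuming the firewalls are loaded as modules, which is the stock setup; pf.ko, ipfw.ko and ipl.ko are the module names for pf, ipfw and IPF):

    kldstat | grep -E 'pf\.ko|ipfw\.ko|ipl\.ko'

If nothing matches, none of the three is loaded (the one case this misses is a custom kernel with a firewall compiled in).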


OVAs are basically tar archives with some XML inside. If you want, you can convert an OVA to a raw image or VMDK or whatever the latest fancy format is, and bhyve can boot that for you. Raw is the better choice.
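A rough sketch of that workflow (the file names are made up; assumes qemu-img from the qemu-tools package and the UEFI firmware from the bhyve-firmware package):

    # an OVA is just a tar archive: unpack it, then convert the disk
    tar -xf appliance.ova
    qemu-img convert -f vmdk -O raw appliance-disk1.vmdk appliance.raw

    # boot it with bhyve: virtio disk/net (assumes tap0 exists),
    # UEFI bootrom, serial console on stdio
    bhyve -c 4 -m 8G -H \
        -s 0,hostbridge -s 1,lpc \
        -s 2,virtio-blk,appliance.raw \
        -s 3,virtio-net,tap0 \
        -l com1,stdio \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        appliance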

bhyve, unlike other "famous" hypervisors, is pretty stable, has good enough virtualized drivers (although I'm sure Oxide has made it better) and can boot a VM with 1.5 TB of RAM and 240 vCPUs [1], something I was not able to do with anything other than bhyve.

I know this is HackerNews, so I have to say it: marketing != engineering. Just because the FreeBSD project's marketing sucks doesn't mean the engineering is bad. Usually it's better than the mainstream ones.

1: https://antranigv.am/posts/2023/10/bhyve-cpu-allocation-256/


There are two parts to the question: the file itself and the underlying hardware. The wiki is pretty light on details; does it actually support emulating the same hardware as VMware? I'd assume no for the vmxnet devices, but Intel E1000? Adaptec SCSI adapters? Similar USB and VGA?

A lot of the vendor-provided OVAs cut out a bunch of hardware support on the assumption that they only need to support VMware-emulated hardware.


> is that a defect in upstream, or is the local sh not completely posix compatible?

AFAIK it's an issue with upstream. Like most open-source projects, there are Linuxisms/Bashisms in there.


Oh yes, whilst Docker was created to help developers think operationally, it (ironically) ended up helping developers not think about operations at all.


Author here!

I'm not against the "concept" of systemd; I think the BSDs need a "system" layer as well, just not systemd. The ideas are amazing; the implementation is the problem.


Hi there! Author of the post here!

I will answer most of the questions below :)

1) 3 days to set up macOS? Yes, it took me at least 3 days. Keep in mind that a setup is not just installing software; it's also dotfiles, the shell environment, automount (I use NFS a lot), PGP/GPG-alike keychains, the OS keychain, the firewall (pf in my case), privacy settings, company-related software, etc. So yes, it takes time, which I am okay with. My problem with macOS is the fact that updating/upgrading the system breaks a lot of configuration.

2) Why FreeBSD? Because I love it :) My company's product is based on FreeBSD, my servers are FreeBSD, and my operating system of choice for teaching is FreeBSD. The handbook is there, all the man pages are well written, pkg is easy to use; it's a whole system. Also: ZFS and DTrace make your life easier. Sure, I can have ZFS and eBPF on Linux, but why learn a new technology when DTrace is rock-solid? FreeBSD is not "just" an OS, it's a complete self-hosted development ecosystem.

3) WiFi? Yes, the WiFi is not the best, but not everyone needs a 100 Mbps connection. I have a wired connection at home to use when streaming movies to my PS4 (also a FreeBSD-based system), but other than that, it's fine. I will still donate every year so the devs can improve it.

Apologies for the bad English; it's not my native language.

Thanks for posting and reading!


> 3 days to set up macOS? Yes, it took me at least 3 days. Keep in mind that a setup is not just installing software; it's also dotfiles, the shell environment, automount (I use NFS a lot), PGP/GPG-alike keychains, the OS keychain, the firewall (pf in my case), privacy settings, company-related software, etc. So yes, it takes time, which I am okay with. My problem with macOS is the fact that updating/upgrading the system breaks a lot of configuration.

I do get frustrated when other people jump in to, in effect, say "jeeze, three days, what are you doing wrong?"

I sent my work MacBook in to Apple for a keyboard replacement, which naturally means they have to wipe the SSD, as one does with keyboard replacements. Setting it up again meant replicating three years of cruft that I had long since forgotten about. It's been a month since I did this, and I'm still not back to the level it was at before.

Password manager, check. Both the native 1Password and the browser extensions. Speaking of browsers, I need to install both Firefox and Chrome for testing. Brew? OK. AWS, gotta configure new access credentials there; now let's install the aws-cli, oh, it's not available in a package manager, cool. Node, Go, Rust, Elixir, OK, now maybe my git repositories? Oh, git isn't installed, let's install Xcode, and there's a system update, that'll take about 25 minutes. Didn't I have a command to quickly switch Kubernetes contexts? Let's see if I snippeted that somewhere; actually I guess I need to install eksctl and kubectl now. Don't forget email sign-in, calendar sign-in, gotta install Slack, iTerm, VSCode; jeeze, I remember VSCode being a lot more productive, yup, I'm missing about twenty extensions.

This stuff is really, really hard to automate; not because it's technically hard to automate, though in some cases it is, but because it's shit I do, like, once every three years. No one automates things they do three times a decade. Cloud or local server system image backups can help, but I'm not giving Apple a full system image for Time Machine to use; there's too much sensitive data on this machine. It's just hard! And that's OK.


Can I make a suggestion?

1) Before sending your Mac in for repair, use Carbon Copy Cloner [1] or SuperDuper! [2] to make a clone of your system drive to a spare SSD.

2) When your Mac is returned to you, if the system drive has been wiped, then use the same software to restore your backup.

Both these programs are free (gratis) for the described use, and have a reputation for reliability. The spare SSD drive will cost about $80 (how much is your time worth?).

[1] https://bombich.com [2] https://www.shirt-pocket.com/SuperDuper/SuperDuperDescriptio...


I'll definitely do something like this next time. This one just caught me by surprise; I should have done the research, or listened to the tech when he asked if I'd backed up my data (they always ask that, even if there's little chance of an SSD wipe), but apparently the SSD encryption module (Touch ID) and keyboard are all one unit, so replacing the known-defective 2016-2018 keyboards wipes everything. True world-class engineering from Apple, but what can you do; there's plenty of blame on my side as well.


Is it really that hard to automate things on macOS? I would have thought you could put most of the install instructions for everything you listed into a bash script. Is macOS locked down in some way?

Why is logging into things an issue if you have a password manager to auto-fill them?

On Linux I just make a backup of the Firefox folder so I don't have to reinstall the extensions on a new computer. All the settings files I want to keep live in a Syncthing folder, and then I have a bash script to create symlinks where those settings are supposed to be.

Doing all that manually every 3 years would be my idea of "really, really hard". With a script you add the instructions once and then you can keep reusing it without any effort.
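A minimal sketch of that kind of script (everything here is illustrative: the package names, the Sync/dotfiles layout and the Firefox profile path are assumptions; adjust them to your own setup):

    #!/bin/sh
    # install CLI tools and apps (assumes Homebrew is already installed)
    brew install git node go rust elixir awscli kubectl eksctl
    brew install --cask firefox google-chrome iterm2 visual-studio-code slack 1password

    # link settings kept in a Syncthing folder back into place
    for f in .zshrc .gitconfig .tmux.conf; do
        ln -sf "$HOME/Sync/dotfiles/$f" "$HOME/$f"
    done

    # restore the Firefox profile from backup
    # (you'd still need to point profiles.ini at the restored profile)
    cp -R "$HOME/Sync/firefox-profile" \
        "$HOME/Library/Application Support/Firefox/Profiles/"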


I admire people who write and publish in non-native languages; I don't have the guts to publish in my bad German.

It's clear you're non-native, but it's not "bad English". :) Thanks for sharing it with us! I like hearing about the range of experiences that highly technical people have with macOS. I'm still trying to use it as my daily driver, even though it takes a few days of setup (compared to the old days of building a preconfigured image), because my alternate option (Linux, for me) also takes a similar amount of time. You're right about Windows being a trash fire.

PS: "outside of the box" is the end of the idiom referring to "thinking outside of the box" (thinking differently about a problem). I think the one you meant to use in your article was "out of the box", which means the first experience with a product when it's opened or unwrapped (think: "taken out of the box"). I hadn't even noticed how similar these two are until today.


Hi, curious about battery life on your laptop. My impression is that many devs run Windows or macOS for the good power management.


I mean, I've built a Kubernetes-y thing for my own needs, mostly "scheduling" LXC containers and FreeBSD jails, and it needs way less than 256 MB of RAM and it's around 20 MB. Maybe I should clean up the code and publish it.


That's literally not Kubernetes, though. Does it support custom resource definitions? Services? Deployments? ConfigMaps? Multiple containers on a single host, i.e. Pods?

Your project does sound very cool, but I don't think it provides a fair basis of comparison re: size and RAM usage.


Very good question! Custom resource definitions -> no. Services -> yes. Deployments -> yes. ConfigMaps -> no. Multiple containers on a single host -> yes. I should add the most common features, I think. I also haven't integrated it with any reverse-proxy service yet, but I will try that too.


The site has been updated to reflect that the nodes only need 512 MB of RAM to run K3s _and Kubernetes workloads_. K3s itself doesn't consume 512 MB of RAM.


For people who are going to ask "Do people still use Pascal?": long story short, yes they do. Free Pascal Compiler is not only the traditional Pascal that we used to learn in school; it's also an Object Pascal compiler, and it has many, many libraries for modern programming :))

I started learning programming with Pascal when I was a child, and it helped me understand many things. Lately I had a look at Python, which I love a lot, but when I took a look back at Pascal (Object Pascal this time) I understood that everyone should start programming with Object Pascal and learn the basics (then use whatever you want: .NET, Java, C++, Ada, etc). BUT then I found out about this amazing language Oberon-2, and I would say that in schools we should move not to Python/Java but to Oberon-2 (again a Wirthian language).

Sorry for my bad English, just wanted to share my view. I'd also like to say that (IMHO) making GUI programs using Lazarus is much easier than, well, anything else :))


If you want to dive into the world of Oberon, you should check the PDFs available at Niklaus Wirth's web site and the AOS wiki:

http://www.ethoberon.ethz.ch/books.html

http://www.ocp.inf.ethz.ch/wiki/Documentation/Front

You can also have a look at what the OSes used at ETHZ were like:

http://progtools.org/article.php?name=oberon&section=compile...

We already had it quite good on the PC with Turbo Pascal and later Delphi, which is why I find it a bit sad that it took us almost 20 years to get .NET Native, instead of it being the default toolchain from day one.


To be fair, most of the benchmarks I've seen from end users don't have .NET Native outperforming the CLR except in huge number-crunching situations [1,2]. As one would imagine, in those scenarios the standard stack of C++ with the GNU GSL [3] (which includes your BLAS, GMP, PDE, and standard lin. alg. stuff) is still pretty much king, with MPL (based on MPI) adding some (often modest, sometimes incredible) OpenCL gains. In numerical methods, Fortran with a modern Intel MKL and MPI still reigns for SIMD.

I guess what I'm saying is: for your day-to-day Rapid App Dev (which Borland Delphi + the VCL was absolutely king at, up until MS put out LightSwitch) you weren't really crunching away. Likewise, especially with modern JITing, if you _do_ crunch with a language that hits the CLR hard, you still get performance similar to .NET Native (contrary to MS claims of "performance similar to C++").

I've been an MS advocate for ages and admire what they've been doing since ~2005, from MSR all the way to the production cash cows (MS SQL Standard gives you a ~LOT~ of bang for your buck with SSDT/SSAS/SSRS); check my 8-year post history: when everyone was "rails rails! iOS! MBP!!" I was silently C#3'ing along. I say all this as a long preface to pose this question: what exactly did .NET Native bring that presumably made it catch up to TP/Delphi?

[1] http://code4k.blogspot.com/2014/06/micro-benchmarking-net-na... [2] https://dzone.com/articles/net-native-performance-and [3] http://www.gnu.org/software/gsl/


30 years ago I was having these same arguments, between those using Assembly and those who dared to use C, Pascal, Basic and Modula-2 on home computers.

20 years ago I was having these same arguments, between those using C or Turbo Pascal and those who dared to use C++ and the Object Pascal features in Turbo Pascal/Delphi.

So I can still envision the day managed languages get to the next cycle of this discussion.

> what exactly did .NET Native bring that presumably made it catch up to TP/Delphi?

Static binaries, and integration with the Visual C++ backend (which is not used in the JIT/RyuJIT/NGEN).

Pushing C# into more use cases where they still make use of C++. If you look at the C# 7 roadmap, there are still more features coming in that direction.

One of the reasons they made .NET Native, which actually came from Singularity, was that C++/CX uptake was not that big. Most developers only reach for C++/CX instead of C#/VB.NET for the APIs not exposed to .NET, like DirectX.

I was a big fan of TP and eventually made the jump to C++, which is also very dear to my heart.

But I would rather have a full stack language that offers me the safety of Algol languages, without C underpinnings.

With .NET Native and the integration of System C# features into .NET, it is becoming quite appealing.


I apologize if I came off snarky; my intention wasn't to be argumentative. All of my questions were posed with genuine curiosity rather than the passive-aggressive tone I may have unintentionally conveyed. I certainly am not in that group of language warriors. I don't have any allegiance to languages, platforms, styles of languages, or even architectures (I'll use a Harvard architecture over a von Neumann one if it gets the job done better).

Most of the people I know choosing C++ over C# are using it for the ability to get soft-real-time guarantees w/r/t performance. E.g. the game development guys have a "this next framebuffer ~must~ be done within 16.6 or 33.3 ms" requirement, or "I need a deterministic 'buy/sell/hold/not-enough-information-to-say-with-a-high-confidence-factor, defer-to-safety-net-policy'". At CppCon '15 most of the people I spoke to fell into a similar camp. I don't think I spoke with even one engineer whose day job involved using C++ because <insert CLR language here> failed to have interop capabilities with DX or whatnot. Of course, those who attend conferences will bias in a certain way, so I'm not holding up that demographic as in any way representative of the C++ user body.

My assertion was simply: those who are using C++ are using it for the benefits of predictable behavior. They can't afford the situation where the GC decides that now is the appropriate time to move everything marked Gen0->1 over, and, oh by the way, it's all a few bytes big and fragmented, so no pre-fetching help for you. What makes those guys different from the ASM guys who refused to use C back in the day is that the compiler eventually outperformed the hand-rolled stuff; but even then the ASM guys were steadfast. You make some really interesting points, though, and you're right: I haven't looked at the C# 7 roadmap. Hell, to be honest, I haven't even written enough code to take advantage of Roslyn's introspective capabilities. Thanks though, insightful comment as always. (Please keep on posting and don't disappear like the good posters of yesteryear (edw519, grellas, etc).)


> (Please keep on posting and don't disappear like the good posters of yesteryear (edw519, grellas, etc))

Thanks, Andrew. I didn't disappear. Just spending more time building and less time posting.


"30 years ago I was discussing these arguments between those using Assembly and those that dared to use C, Pascal, Basic and Modula-2 in home computers.

20 years ago I was discussing these arguments between those using C, Turbo Pascal and those that dared to use C++, Object Pascal features in Turbo Pascal/Delphi."

Nice take on it. :)

"But I would rather have a full stack language that offers me the safety of Algol languages, without C underpinnings."

Exactly. That's always been a Pascal advantage. You can be clear on exactly what it does, and it's probably safe by default.


Oberon was actually used in a couple of German universities, not just because of the language, but because of the whole environment (which feels a bit like Smalltalk for bit-twiddlers and -shavers).

Having said that, Oberon is a pretty strict and bare language compared to others, e.g. in its current standard, the old structured programming rule of one return statement per function is strictly enforced. Whether that's an advantage for teaching or not, I can't really say.


Sometimes I think strict and bare is better, especially for learning. One of the reasons I like C so much is that it makes doing complicated things really hard, so you end up solving problems in simple ways instead. Bare languages really force you to understand and think through your problem.


First they killed chat by bringing in Hangouts; now they're killing the idea of email with "Inbox".

Google, killing the net lately (:

Personal mail servers forever! ;)


Lucky us, now we have Jolla and Sailfish OS (: There are also platforms like Mer, Nemo Mobile, Maemo, Tizen... but the average user likes iPhone/iOS and Android; that is marketing, of course...

Personally, I'd go with Jolla and Sailfish, as I now have an N900 with Maemo, and I feel the freedom; even after 5 years it is still one of the best phones. Communities are what we need, not capitalism.

