Those slides show that the solution won't work on actual onions. Call the innermost layer of the onion its "biological center", and call the center of the spheroid approximately occupied by the onion its "geometrical center".
As is beautifully illustrated on slide 50, the biological center is generally not particularly close to the geometrical center, and this introduces huge distortions in slices that cut close to the biological center.* A single layer of the onion can run parallel to the knife cut for quite some distance.
* The slides also observe that in reality, before chopping an onion, you cut off the top and bottom. This same phenomenon explains why you have to do that; a vertical cut through the top or bottom end of the onion would just give you one huge piece. (You also need to get rid of the roots on the bottom and the sprouts on the top, but even if you didn't, you'd have to cut off the top and the bottom because they curve the wrong way.)
As you point out, without a perfectly symmetrical onion this will of course not work very well. You would need to use a moving geometrical center point when slicing for best results.
Further, as noted elsewhere the outer layers are thicker in a real onion, so we need to reformulate to take this into account.
The other obvious simple improvement I can think of would be to use radial cuts in both directions, each direction with its own optimized floating center point, of course. Reformulations would need to take this into account, although the end result would be quite close, and likely well within the margin of error for almost any human being aiming at an imagined floating center point below a cutting board :).
I have been looking into setting up my first Proxmox box, here is my take as a newcomer.
I wanted to do what I think is a very basic and very common setup: modem > Proxmox box > OPNsense VM > physical wifi router via onboard 10Gb NIC + internal network VMs like OMV etc. The goal is to add a full network filter via OPNsense, and to allow access to a media server, backups, etc. from the internal network.
I see no OPNsense script, the OMV script is basically contraindicated because it should be a VM instead of an LXC container, and I don't see any glue scripts to get VMs talking to each other, which is an important part of Proxmox configuration. So it looks like there is room here to get some basic setup scripts for a simple home server either improved or added to the collection.
No, it isn't basic and common (it is for me, but perhaps not for you, and certainly not for most people).
OK, so you want to virtualise a router and firewall. That's fine. I have deployed roughly 200 pfSense firewall/routers as VMs and physical boxes and OPNSense is similar, so I can probably help.
At a minimum you will need two physical interfaces (one will actually do but you will need to know what you are doing!). You need "WAN" and "LAN". OPNSense is still FreeBSD based, I think, so it will not run in a L[inux]XC container for obvious reasons.
Your last paragraph seems rather confused. I don't know what you mean by "glue scripts". VMs communicate via networks.
I suggest you try a few experiments to get to grips with virtualisation properly and then move on from there. If you swing by the Proxmox forums with specific issues we'll try to help out but in the end you need to dive in full on ... or not.
I have a similar setup with a PM box and a Ubiquiti Dream Machine Pro. I provision VMs with a Terraform provider and have a script that processes Terraform outputs into an Ansible inventory INI file to handle configuration. I find it pretty straightforward and could take it further by scripting my VLAN setup, but it changes so infrequently that I don't mind doing it manually.
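For anyone curious, the glue can be tiny. This is a minimal sketch, assuming a hypothetical Terraform output variable `vm_ips` whose value maps hostnames to IP addresses (your output names and structure will differ):

```python
import json

def terraform_to_inventory(tf_output_json, group="vms"):
    """Turn `terraform output -json` into an Ansible INI inventory.

    Assumes a hypothetical output variable `vm_ips` whose value is a
    mapping of hostname -> IP address.
    """
    outputs = json.loads(tf_output_json)
    hosts = outputs["vm_ips"]["value"]
    lines = [f"[{group}]"]
    for name, ip in sorted(hosts.items()):
        lines.append(f"{name} ansible_host={ip}")
    return "\n".join(lines) + "\n"

# Example with a fake Terraform output:
example = '{"vm_ips": {"value": {"omv": "10.0.0.10", "opnsense": "10.0.0.1"}}}'
print(terraform_to_inventory(example))
```

From there, Ansible picks the file up with `-i inventory.ini` as usual.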
There is no OPNSense script, I think, historically in part because any misconfig could expose the Proxmox instance to the world. It is easy enough for advanced users to spin up a VM with the ISO. There has been a request for an OPNSense script made recently.
I agree about OMV. It can certainly be used as is, but that is not usually how people want to use it. A note was added to the script a few days ago.
> I don't see any glue scripts to get VMs talking to each other
There is a Tailscale script which technically helps them talk to each other (over Tailscale) :)
The scripts are designed to set up self-contained LXC containers. We are trying to avoid building our own k8s.
Great, now I am down the tailscale rabbit hole and just have to use it!
I think I will stick to using Proxmox virtual ports to create my network, so I can more easily stick to individual device registration in Tailscale and save on that overhead when I'm home, but then also add Tailscale/Headscale into the mix somewhere so I can tap in via VPN when I am out of the house.
Tailscale and OPNsense are more difficult to get working together due to conflicting project goals (one blocks well, the other opens up well), but it looks like it's worth it to me.
I use Proxmox with an OPNSense VM and have multiple NICs - one is dedicated to the fibre ONT. I also use an external wifi mesh.
I have a couple of other VMs (Unraid hosting Dockers with SATA card passthrough for legacy reasons, and a VM for Home Assistant OS) and lots of other LXCs. It works superbly.
VMs usually have their virtual NICs connected to a bridge interface on the host (like a virtual switch) so they can communicate. Proxmox creates one by default that is also bridged to the physical NIC you set up for management when you install it, so it just works.
In the router case, you'd likely want this default one to be the 'internal' network and have a separate interface (either physical or VLAN) for the WAN.
I am not perfectly informed, but in my case, OPNsense would need to be the only VM with access to the incoming NIC port, and all other VMs and the router would need to use virtual network interfaces fed only from OPNsense for incoming traffic. The router would be the only device with direct access to the outgoing NIC port. None of that seemed incredibly difficult when I looked into it, but still, it was the type of recipe I was expecting when I saw "Proxmox scripts".
And of course this means that the Proxmox box as a whole should have similar hardening to a typical web server, with minor tweaks to allow residential traffic on various other standard ports. So that hardening would probably be another script I would like to see (I don't know what all the proxmox scripts in the first section do).
VMs already use virtual network interfaces, which are by default bridged to `vmbr0`, a bridge that proxmox creates by default which is also bridged to the hardware NIC. For your use case, you simply want to create a second bridge, e.g. `vmbr1`, which is not bridged to the hardware NIC. You would then assign two virtual NICs to opnsense, one on each bridge (WAN and LAN, essentially) and then choose `vmbr1` as the bridge each time you create an "internal" service behind opnsense.
Since selecting the bridge for a service's NIC is part of setting up each service, the only thing such a "glue script" would be doing is creating the `vmbr1` bridge. That's already a one-liner.
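For reference, a sketch of what that looks like in /etc/network/interfaces on the Proxmox host: an isolated bridge with no physical ports (the GUI's "Create > Linux Bridge" produces the same thing):

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

Anything attached to vmbr1 can then only reach the outside world through whatever VM (here, OPNsense) has legs on both bridges.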
I was looking at a proxmox/(pfsense/opnsense) tutorial the other day. They recommend binding the WAN interface to vmbr1 (or anything other than vmbr0) since VMs are created with their ethernet bridged to vmbr0 by default. This configuration is what most people want so it'll be a little less work setting up networking.
What assurances do you have that your "perma-off" setting is even minimally respected? Unless you have a physical disconnect for the chip, I would presume it is still gathering data and that data is being shipped out. There are just fewer companies that can do it.
>Unless you have a physical disconnect for the chip, I would presume it is still gathering data and that data is being shipped out. There are just fewer companies that can do it.
If your paranoia level is this high, what makes you think they won't put a backup GPS receiver in the SoC or modem? They're perfect places for it, because they have tons of traces connected to them that you could piggy back off one to use as a rudimentary antenna.
We live in a world absolutely awash in corporate and state surveillance, where virtually all parties constantly position themselves to increase information gathering. We are literally talking about an abuse of the fine print in TOS by a government entity, and I am paranoid?
More like Russell's teapot would apply to anybody who would assert there is a benign and benevolent corporate actor out there. Prove it. You can't, and every major company in this domain has been caught out at least a few times, plus we know there are laws on the books that force them into secrecy.
This is honestly so basic it almost feels like gaslighting to call me paranoid. Either keep your phone in a metal box, or presume it is compromised. Just like the most basic level of opsec for virtually all relevant government agencies.
>We live in a world absolutely awash in corporate and state surveillance, where virtually all parties constantly position themselves to increase information gathering. We are literally talking about an abuse of the fine print in TOS by a government entity, and I am paranoid?
There's a pretty big difference between "some apps collect your location data and sell it, there's some clause buried in the ToS authorizing them to do it", and "your phone has hardware/software backdoors to collect location data even if the option is explicitly turned off". The former can plausibly be defended in court, whereas the latter is obvious fraud. Before you claim "but big corporations are above the law!", Google recently paid millions to settle a lawsuit for tracking users in incognito mode, which sounds evil but was pretty banal in actuality. They didn't add some sort of backdoor to exempt their domains from incognito mode. Rather, they treated incognito sessions as regular sessions, such that if you used incognito mode but visited a site that used Google Analytics, your visit would still be recorded.
>This is honestly so basic it almost feels like gaslighting to call me paranoid. Either keep your phone in a metal box, or presume it is compromised. Just like the most basic level of opsec for virtually all relevant government agencies.
The police also have the ability to bug your house, but I think it's pretty paranoid for the average Joe to think their house is bugged. "Opsec" also involves doing a threat assessment and understanding what the most likely risks are, not assuming the FSB is out to get you and hiding in a bunker.
The OS and preinstalled / uninstallable apps from the OS provider, the phone manufacturer, the hardware manufacturers, sometimes the network provider have their own TOS, their own gag orders, etc. The TOS undoubtedly extends to the UI toggle, and even if it didn't they are almost certainly under gag orders about that functionality anyways. That is what I see as virtually unavoidable. Just accept that you are compromised and that your efforts are to mitigate, not eliminate data exfiltration.
You don't need to go full prepper mode to recognize the situation. I just want to promote honest awareness, not a specific course of action as a consequence of that awareness. My threat assessment is that this happens nonstop. It has been revealed to be the case in the past, in the present as per this article, and undoubtedly will only continue in the future.
>The OS and preinstalled / uninstallable apps from the OS provider, the phone manufacturer, the hardware manufacturers, sometimes the network provider have their own TOS, their own gag orders, etc. The TOS undoubtedly extends to the UI toggle, and even if it didn't they are almost certainly under gag orders about that functionality anyways. That is what I see as virtually unavoidable.
Again, there's a pretty big difference between an opt-out and a system that outright betrays the user. Moreover, what you said might work for someone who has location services enabled in the OS and doesn't realize there's preinstalled Facebook with pre-approved permissions, but OP specifically mentioned disabling location services in the OS, which should disable it for all apps regardless of whether it's pre-approved.
>This is honestly so basic it almost feels like gaslighting to call me paranoid.
I think "assume good faith" should be revisited for some "agents," but it is too ripe for abuse.
Explaining past the correct answer and then getting reduced to absurdium is only something a RLHF-prodding half-cognizant truth-addicted semi-sentient half-escaped super-intelligence would consider as wise.
>Either keep your phone in a metal box,
Faraday cage*
And remember, they can induce current via highly directional beam-forming, so thinking "no power" = "no computation/exfil" is cute, but naive.
> And remember, they can induce current via highly directional beam-forming
Once the tech eventually gets there this will be sold as a feature and will replace our current wireless-charging tech. There will be no more “charging” at all — every device will simply be always on. Then it will be that much harder to escape The System's eye since a dead battery will no longer be any excuse for a device to disappear.
Imagine the migraines the schizos will be justified in having.
Fuuuuuuuuuuuuuuuuuuuuuuck me its hard to know you're not crazy when fukin APT/sufficiently advanced autocorrect can convince someone to hit you with a microwave (beam).
And yet somebody who voted said far above in this thread that the machine reads a barcode on their ballot, so they have 0% chance of verifying whether their vote was entered correctly. And there is always the added problem of a Dieselgate-style obfuscation: the machine counts votes differently when in verification mode than in actual vote-counting mode.
My preferred machine would be one that did not use integrated circuits, but was simple enough that the entire board and circuit was visible - with no software beyond the circuitry at all. You just need a very simple sensor and tally wheels that mechanically advance, like those used for measuring wheels etc. No need for memory. Keep automation to the absolute bare minimum.
I am only partially qualified in that I am not a professional archeologist, but I have done post-doctoral archeological studies and have read enough archeological studies to understand the larger academic context.
It is not possible to present all the data informing a judgment in such a short work. Even in a book, it would not be possible. Thus it is common in archeology for papers to be written as part of an ongoing conversation / debate with the community - which would be defined as the small handful of other archeologists doing serious research on the same specific subject matter.
Part of that context here is that these tombs are well-established to be the royal tombs of Alexander's family, spanning a few generations including his father and his son. This is one of the most heavily studied sites in Greece for obvious reasons, and that is not something anybody is trying to prove.
In that context, his arguments are not trying to identify a body as one among millions, but as one among a small handful of under ten possibilities.
At the same time, the fact that he is not a native English speaker and general archeological style come into play. For example:
"the painter must have watched a Persian gazelle in Persia, since he painted it so naturalistically (contra Brecoulaki Citation2006). So the painter of Tomb II has to be Philoxenus of Eretria" sounds like a massive leap, and it is. He continues:
"... Tomb I (Tomb of Persephone) must have been painted hastily by Nicomachus of Thebes (Andronikos Citation1984; Borza Citation1987; Brecoulaki et al. Citation2023, 100), who was a very fast painter (Saatsoglou-Paliadeli Citation2011, 286) and was famous for painting the Rape of Persephone (Pliny, N. H. 35.108–109), perhaps that of Tomb I."
Another huge leap, both 'presented as conclusions'. However he then continues to indicate these are just hypotheses: "These hypotheses are consistent with the dates of the tombs..."
So his English language use of presenting things factually does not indicate certainty in the way the words would be used in everyday speech. He seems to perhaps misunderstand the force of the terms, but also appears to be working within the context of the conversation with other archeologists I mentioned at the start: they all know every affirmation reads as "probably", rarely anything more. So it is relatively common shorthand of the craft in that sense.
I believe you are overthinking his responses to other authors, although I understand the culture shock. It is an ongoing conversation and archeologists tend to be blunt in their assessments. Add Greek bluntness on top of this, and it does not seem to matter to the material.
As to your last question, is this legitimate research? Overall the answer appears to be yes, although I could see several points (such as the identification of artists I quoted above, and various other items I noticed) which I would never have put into ink the way he did. Still, most of his arguments are compelling. It is a shame that the aggressiveness of a few affirmations detracts from the overall value of his work. Archeology is not code, nor is it physics. It does not pursue universal truths that are easier to verify through repeated experiments, but unique historical ones, which necessarily attempt to interweave physical details and ancient historical records. Each field has its own level of certainty, and the fact that we cannot establish these details with the same certainty as we can establish the chemical formula for water does not make them useless, or pure inventions. Far from it.
I really don’t know why I stumbled into the comments section on this particular article, but while I’m here I have to commend you on writing perhaps the most thoughtful and eloquent comment I have ever read on HN.
There are some curious inclusions on that page, but the context link reveals that some highlights really aren't the comment itself but rather the discussion it triggered.
A "35 child comments" note or similar alongside the highlighted comments might encourage more browsing.
Indeed, but after scanning this article that pulls in all those pieces of indirect evidence I wondered whether some type of structured knowledge database (that encodes the innumerable pieces of historical information that are known, tags them with confidence levels etc.) would not be useful to advance research in such domains.
Something like a large collection of RDF triples against which you could run a query like "Given this new data point, how much more likely is it that Alexander the Great's tunic has been identified in a royal tomb at Vergina?"
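As a toy sketch of the idea, in Python (the facts, predicates, and confidence scores here are entirely made up for illustration):

```python
# A toy triple store: (subject, predicate, object) tuples, each
# carrying a confidence score for the assertion.
triples = {
    ("tomb_II", "located_in", "Vergina"): 0.99,
    ("tomb_II", "contains", "tunic_fragment"): 0.9,
    ("tunic_fragment", "dated_to", "4th_century_BC"): 0.7,
}

def query(subject=None, predicate=None, obj=None):
    """Return all (triple, confidence) pairs matching the pattern;
    None acts as a wildcard."""
    return [
        (t, c) for t, c in triples.items()
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Everything asserted about tomb_II:
for triple, conf in query(subject="tomb_II"):
    print(triple, conf)
```

A real system would presumably use an actual RDF store and some way of propagating the confidence scores through chains of inference, which is where the hard part lies.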
To me it sounds like it could (and likely would) backfire, by replacing judgment with numbers. Who is giving the confidence score? What confidence score does each confidence score receive? Why are those scores more valid than the expert in that very narrow domain? If that expert is the one giving the scores, are they not just gatekeeping? Et cetera. I don't want to see researchers rewriting their papers because their cumulative source score is 68.17, and it should be 72.5 or higher.
Also, there have been points in time when established archeology was wrong, and this seems like it would produce a bias towards what we currently think is true.
For example, theories on how the Polynesian migration came to be are still in flux, to the point where one theory was put to the test by actually sailing to the different islands using only traditional wayfinding.
I would phrase it otherwise: supporting judgement with numbers. It's not about altering conclusions, but about making more transparent the factual basis and associated reasoning from which they are derived.
The analogy would be trying some exotic food and having a list of ingredients. Yes, it's good to listen to a local as to how it tastes (and whether it cures all diseases), but if the indication is 50% sugar, that's a data point worth knowing.
I think that, effectively, the corpus of research papers and citation links is this knowledge database. It isn't structured the way I would structure it in postgres but it seems to be working quite well for the professionals in this field.
I know there have been some interesting finds when an archeologist has dug up a site report from the 1840s that had long lain ignored by academia, but these are quite rare occurrences, and the scale of people involved here (when we're talking about something hyper-specific) is so small that they can probably just sort it out by talking to one another.
For the outside public such a neatly tagged database might be helpful if someone outside of the circle wants to independently research a subject in depth but, honestly, these folks are pretty open to questions and discussions so if you're extremely interested in Gobekli Tepe or some such there's someone out there who is happy to start a conversation with you.
> the corpus of research papers and citation links is this knowledge database
yes, I think so too. In the typical fashion of "pre-digital" information management systems it is extremely economical in the way it encodes things, with statements like "X is true as shown \cite{Y}" etc. But...
> but it seems to be working quite well for the professionals in this field
what prompted my comment is exactly the fact that it didn't seem to work that well in this case :-) (nb: I am not remotely an archeology boffin, just triggered by the adversarial language of the paper).
In more quantitative fields people talk about reproducible research; here it's more a question of whether similar fields would benefit from "reproducible chains of reasoning".
> it seems to be working quite well for the professionals in this field
That is the universal response to new technology: What we're doing is working fine! What they are saying is, 'everything we've accomplished has been with the old technology'.
I promise the same was heard from engineers and architects encountering CAD, from cavalry asked to give up their horses (the conservative urge is so great, many died charging machine guns!), from literary scholars presented with computerized tools... it's always the same. One person who installed the first email systems for many businesses told me that, over and over, people would say 'our paper memos work fine - this is just technology for technology's sake'. They meant, 'everything we've accomplished, we've done it with paper memos'.
New technology lets you do old things much faster and/or lets you do new things you couldn't do before - new things you didn't dream of doing, and as people discover uses for it, new things you won't know about for years.
And the universal argument that people pushing tech are making boils down to 'I don't understand your field, or the particular needs of it, but I'd like to sell you a process that I invented. I'm not going to be held responsible for any bad consequences of you adopting it.'
Unsurprisingly, people tend to resist this sort of thing.
Sometimes the local maximum people are stuck in sucks, and they need a shakeup.
That shakeup will not be well received when it comes from a complete stranger, who has no rapport with the community, with zero skin in the game.
> So his English language use of presenting things factually does not indicate certainty in the way the words would be used in everyday speech. He seems to perhaps misunderstand the force of the terms
He might or might not. It's also possible that academic practice in his native language is to use terms of equivalent force.
Of course, if somebody had studied archeology and the Greek language, and had read and was friends with a variety of Greek and many other ESL scholars working in the field, perhaps their comments would hold more weight than total speculation. Despite all the contextual clues, since the words are there as they were written, I cannot state for a fact that he did not intend for them to come across exactly the way they do to the ever-so-elusive "reasonable native English speaker".
That's not what I was saying. The words have a certain force by definition. The way they're used is a separate concern; it's possible that in another language, the correct academic practice is to use words indicating certainty while it's just understood that the certainty isn't really present. In such a case, he might accurately understand what the English words he's using mean - they convey total certainty - without understanding that English speakers will interpret them as conveying total certainty.
The force with which you express disagreement is also a general cultural issue. In some cultures, disagreement is expressed as indirectly as possible; in others, nobody really thinks you're actually disagreeing unless you're practically shouting (or so I was given to understand by Russian colleagues). And English lies somewhere in the middle, where you're expected to express your disagreement politely (while framing your obvious scathing contempt in ambiguous Jane-Austen-like wordplay).
I am by no means a typography expert, nor is it a major focus of mine. I have however spent a lot of time reading non-technical prose, and I had a visceral reaction to your comment because of how wrong it seemed. To me the after is so obviously better it struck me as though you were somebody who had never done much deep reading and mainly consumes code or short-form text.
Now, I am completely aware there is nothing behind this other than my visceral reaction. I do not know you at all. I share it only to communicate that to somebody with my background it is an obvious and fundamental improvement.
Is anybody depending on this for mission critical data? Can I throw 1TB of images, PDFs, and miscellanea in there, delete the originals, and keep humming along in practical perpetuity? I presume I would use Litestream for backups on top of other, more general backup solutions, and use a few attached databases instead of one monolithic one for a bit more flexibility / safety?
Well "SQLite is likely used more than all other database engines combined. Billions and billions of copies of SQLite exist in the wild."[1]
And
"SQLite has exhibited very high reliability in the field and a very low defect rate, especially considering how rapidly it is evolving. The quality of SQLite is achieved in part by careful code design and implementation. But extensive testing also plays a vital role in maintaining and improving the quality of SQLite."[2]
> Is anybody depending on this for mission critical data?
There may be the occasional entity with mission critical data which does not, sooner or later, entrust some of it to SQLite. It would be difficult to populate such a list, however.
> Can I throw 1TB
Subject to this[0] list of constraints, yes. The theoretical upper bound is about 281 TB, but this is untested due to a lack of the combination of hardware and filesystem that can create an SQLite file of that magnitude. What happens when the limit is reached is extensively tested, just not at the theoretical maximum, yet.
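That bound falls out of two limits documented on that page, the maximum page size and the maximum page count per database file:

```python
# SQLite's documented maximums (see the limits page)
max_page_size = 65536        # bytes per page
max_page_count = 4294967294  # pages per database file

max_db_bytes = max_page_size * max_page_count
print(max_db_bytes)  # 281474976579584 bytes, i.e. about 281 TB
```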
As for its general reliability, this too is documented[1]. SQLite sets the bar for high-assurance practices in open source software, at a level which more programs should aspire to.
From that limits page, it looks like as shipped it can have a blob of at most 1 GB, and a bit over 2 GB if compiled from source. That's rather limiting for my data, since I have some static images, as well as some PDFs and of course videos that are larger than those limits. I guess every library has an oversized-books shelf, but a higher limit would be more useful for using it in "everything but the kitchen sink" mode.
SQLite might indeed not be the best tool here. But there are ways to make it work. You could use the blob streaming API to chunk the files into and out of the database, although this is not fire-and-forget and would require a pinch of data modeling and some adapter code.
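A minimal sketch of that chunking idea using Python's built-in sqlite3 module (the schema, names, and chunk size are illustrative, and a real adapter would stream from disk rather than hold the whole file in memory):

```python
import sqlite3

CHUNK = 256 * 1024 * 1024  # 256 MiB, comfortably under the 1 GB blob limit

def store_file(conn, name, data, chunk=CHUNK):
    """Split `data` into ordered chunks stored as separate rows."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS blobs ("
        "name TEXT, seq INTEGER, part BLOB, PRIMARY KEY (name, seq))"
    )
    for i in range(0, len(data), chunk):
        conn.execute(
            "INSERT INTO blobs VALUES (?, ?, ?)",
            (name, i // chunk, data[i:i + chunk]),
        )
    conn.commit()

def load_file(conn, name):
    """Reassemble the chunks in order."""
    rows = conn.execute(
        "SELECT part FROM blobs WHERE name = ? ORDER BY seq", (name,))
    return b"".join(row[0] for row in rows)

# Round-trip demo with a tiny chunk size:
conn = sqlite3.connect(":memory:")
store_file(conn, "movie.mkv", b"x" * 1000, chunk=300)
print(load_file(conn, "movie.mkv") == b"x" * 1000)  # True
```

The incremental blob I/O API in the C interface offers finer control (reading and writing ranges inside a single blob), but the chunked-rows approach works from any binding.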
Seems like the kind of thing where, given a compelling need to do it, one could work something out fairly readily and rely on the result. But it's probably not the ideal solution to the problem.
That was pretty much my thought, thanks for the confirmation. It sounds like a relatively simple but well thought out plugin could make it seamless during typical use.
I don't know about 1TB of data, but I think it was used in some missile systems, so I would say yes for mission critical data; not sure if we think about mission critical in the same terms.
There are lots of small things missing from it (like boolean support), but it's used as an archival format, and backwards compatibility for the archive format is prioritized over new features.
Given the disinformation campaign that will take place (at the very least from Russian bots flooding social media), I would much prefer all sources of information to be fully available throughout the election. Of course this is their highest-leverage moment, but it is also critical for the future of the country (at the very least). It is somewhat akin to ambulance drivers choosing to go on strike on Memorial Day weekend. I am not a fan of the tactic, since they could strike at any other time and get the same result, perhaps striking two days longer than they would have to at this time.
Naively, it would seem as simple as updating Android Studio and recompiling your app, and you would be good to go? There must be fewer than 1 in 1,000 (probably fewer than 1 in 10,000) apps that do their own ARM-specific optimizations.
Without any ARM specific optimizations, most apps wouldn’t even have to recompile and resubmit. Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand. Google would just have to decide to support another target, and Google has already signaled their intent to support RISC-V with Android.
I remember when Intel was shipping x86 mobile CPUs for Android phones. I had one pretty soon after their release. The vast majority of Android apps I used at the time just worked without any issues. There were some apps that wouldn't appear in the store but the vast majority worked pretty much day one when those phones came out.
I'm not sure how well it fits the timeline (i.e. x86 images for the Android emulator becoming popular due to better performance than the ARM images vs. actual x86 devices being available), but at least these days a lot of apps shipping native code probably maintain an x86/x64 version purely for the emulator.
Maybe that was the case back then, too, and helped with software availability?
> Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand.
No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device. Though that doesn't change the result re compatibility.
However – a surprising number of apps do ship native code, too. Of course especially games, but also any other media-related app (video players, music players, photo editors, even my e-book reading app) and miscellaneous other apps, too. There, only the original app developer can recompile the native code to a new CPU architecture.
> No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device.
Google Play Cloud Profiles is what I was thinking of, but I see it only starts “working” a few days after the app starts being distributed. And maybe this is merely a default PGO profile, and not a form of AOT in the cloud. The document isn’t clear to me.
I never thought about UBI and copyright - but as soon as you say that, it is immediately obvious to me that when we have some kind of UBI, copyright should be dramatically reduced.
At the end of the day a straight cut is limiting. The next step would be to design the perfect onion dicing knife.
[1] https://drspoulsen.github.io/Onion_Marp/index.html#44