If you leave the country without setting up SMS you can’t ever use 2FA. They claim to support adding foreign numbers, support people being abroad, support adding new DigiD accounts from abroad, but oh no you can’t just add a number. Not even by going to an office or doing a virtual interview. I would think this violates EU law on discrimination. If you live in the UK post-Brexit it’s now totally impossible, I believe (since you aren’t even allowed to make a new account).
Holder of Dutch passport here. I created a DigiD account from France, using a French phone number.
You plan a video conf using their web app, connect at the right time, and show your passport when asked.
As an aside, I log in without using their app, as my Android phone does not support Google Play.
Don't know what happens if you don't have a Dutch passport though. I guess they are under no obligation to render services to people who are neither citizens nor nationals.
A bit like when I got married and the French state wanted proof that I wasn't already married before, during the period I had lived in the UK. The UK services wouldn't give me the time of day, since I was neither British nor living there. I ended up getting an official-looking note from the Dutch embassy to the UK, stating that "to the best of their knowledge I wasn't married" =)
Create - from the EU - yes. As I said: you cannot add a number to an existing account that you already use extensively. And you cannot create a new account from outside the EU. That's what makes it so shambolic. They clearly have the technical ability to do it and to verify identity appropriately.
EU citizens I know have had no problems using similar UK services, nor have non-EU citizens. Usual bank/address shenanigans at the start, but no issues with Government Gateway etc.
After moving to the States and losing my Dutch mobile number, I was also unable to use it for more than 10 years.
During COVID the government provided the ability to schedule a Zoom call to verify identity remotely and set up DigiD with a foreign number, so I finally have it.
"Some of you might have concerns that chlorine gas will develop, but because chlorine is so reactive, and because the salt solution is dilute, it’s safe for this setup."
Yes, the text in the part I quoted says "dilute" now as well. Which is good, as there are many "bucket chemists" out there: people for whom the "why add a pinch when more will be better and quicker" mindset kicks in.
Right. It may also be possible that there is some hope of this action restoring normality. i.e. as a result of this protest, the Core Team become more accountable. At that point, I assume the specific incident(s) may be dealt with according to the Code of Conduct, which may or may not involve transparency. Either way, it could be premature and possibly prejudicial to air those now.
There are a limited number of spy satellites, and their orbits are fairly fixed. If you need to look at a site you have to wait until a satellite passes it (and the people at the site will be able to know quite far in advance when a satellite is coming). And there's no way to delay the pass until the cloud cover clears.
You could potentially hide a satellite from radar observation the same way you can for planes. That's apparently what the Misty program [1] did. Synthetic aperture radar could take care of clouds, though you lose color. Agreed about maneuverability I think.
What kind of intelligence mission benefits from a fast flyby but cannot be performed by a satellite?
The orbits of satellites are known. The target can arrange activities to occur when there is not a satellite overhead. A fast flyby can occur at any time with little or no warning.
Like handling security cameras... you can fire an artillery shell at a security camera, but most of the time security cameras are rendered mission-ineffective if you just shine a bright enough flashlight (or laser) at them.
There's no need to cause an international incident by blowing something up when all you need is a bright light for a couple easily predictable minutes.
On the other hand a stealthy hypersonic flyby is probably invisible.
Another military aspect civilians never want to talk about is that photo analysis depends on illumination, and random satellite passes can't see into valleys and get messed up if the shadows are weird enough. Just because a satellite technically passed within range of Afghanistan as a whole country doesn't mean you can see what's happening on the wrong side of a mountain with bad illumination and a bad view angle. There are a lot of satellites, but not THAT many. There used to be boots on the ground in Afghanistan to launch UAVs, but not so much anymore.
Satellites were great during the cold war when the targets were consistent, and the satellites could be put in orbits with good coverage of those areas (eg Russian military bases). Spy planes like the SR71 and U2 that fly high and fast to evade detection or destruction are essential for getting immediate photos of an area that may not have regular satellite coverage.
They probably won’t send in a U-2 or an MQ-9 to stick ninja swords through the roof of a luxury sedan past rings of S-300 SAM batteries a few hundred miles from shore.
"In order to get all of the details about the tuple from the function one must analyse the bytecode of the function. This is because the first bytecode in the function literally translates into the tuple argument being unpacked. Assuming the tuple parameter is named .1 and is expected to unpack to variables spam and monty (meaning it is the tuple (spam, monty)), the first bytecode in the function will be for the statement spam, monty = .1. This means that to know all of the details of the tuple parameter one must look at the initial bytecode of the function to detect tuple unpacking for parameters formatted as \.\d+ and deduce any and all information about the expected argument. Bytecode analysis is how the inspect.getargspec function is able to provide information on tuple parameters. This is not easy to do and is burdensome on introspection tools as they must know how Python bytecode works (an otherwise unneeded burden as all other types of parameters do not require knowledge of Python bytecode).
The difficulty of analysing bytecode not withstanding, there is another issue with the dependency on using Python bytecode. IronPython [3] does not use Python's bytecode. Because it is based on the .NET framework it instead stores MSIL [4] in func_code.co_code attribute of the function. This fact prevents the inspect.getargspec function from working when run under IronPython. It is unknown whether other Python implementations are affected but is reasonable to assume if the implementation is not just a re-implementation of the Python virtual machine."
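To make the quoted mechanism concrete, here is a rough sketch of what it looked like under Python 2 (tuple parameters no longer exist in Python 3, so this only runs on 2.x, and the exact offsets/output vary by minor version):

    # Python 2.x only: the tuple parameter is stored as the anonymous argument ".1"
    def info((spam, monty)):
        return spam, monty

    import dis, inspect
    dis.dis(info)
    # The first bytecodes are the implicit "spam, monty = .1" the PEP describes:
    #   0 LOAD_FAST        0 (.1)
    #   3 UNPACK_SEQUENCE  2
    #   6 STORE_FAST       1 (spam)
    #   9 STORE_FAST       2 (monty)

    # getargspec recovers the names only by scanning those bytecodes:
    print inspect.getargspec(info)
    # -> roughly: ArgSpec(args=[['spam', 'monty']], varargs=None, keywords=None, defaults=None)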
I'm not very convinced without further information - it sounds like it was already solved in CPython, and was a bug in IronPython? The whole PEP reads as if it's looking for excuses for removal.
"These claims were investigated by the FDA and the Centers for Disease Control, which found no connection between the vaccine and the autoimmune complaints."
Yes, of course, but you then appeared to query the "unfounded" nature of the claims. I thought you had missed the part where they turned out to be indeed unfounded.
You should assume the strongest interpretation of the comments you reply to; it leads to better conversations and is part of the site's guidelines.
Having read the comment I replied to, I disagree that they're unfounded; it follows that I do not believe the results of the FDA's investigation are conclusive. I illustrate this with the Hep-B vaccine, for which the health authorities have not found a causal link, yet the legal system implicitly acknowledges one. There's also my personal experience with the Pfizer vaccine, some side effects of which are not being reported, which puts into question any scientific conclusion that would be reached from the incomplete data being gathered.
Wait. You're quoting the site's guidelines because the poster assumed you were questioning the unfounded claim, but now you're stating that you disagree with the unfounded claim?
Assuming I'm questioning the unfounded claim is not the issue. I'm quoting the site's guidelines because the poster copied verbatim the message I was responding to. This is either quite snarky or assumes very low reading ability on my part.
In short the exchange was:
A: "The claims are unfounded because the FDA found no causal link."
B: "The claims might not be unfounded because the absence of evidence of a causal link is not sufficient to qualify them as such, as is evident by this legal precedent and my personal experience."
C: "`the FDA found no causal link.`"
I hope this clarifies things, I'm afraid we're getting very meta.
I'm not at all against versioning, but found glibc's implementation a source of frustration.
It would be great if you could install Qt 5.9, 5.12, and 5.14 in parallel and have apps use the latest version (except for that one app that triggers a bug, where you pin it to 5.9). This is an actual problem that occurred at work; I had to statically compile Qt in that case. Glibc's idea of lib versions doesn't help here at all, if I understand correctly.
That’s not so much glibc’s fault, but the Unix philosophy of /lib, right? Windows’ “solution” to this is to basically require applications to provide their own runtime (such as Qt, GTK, etc.).
More relevant to the question of libc, the traditional Windows solution was to require applications to distribute or statically compile their own C runtime. Every release of Visual Studio had its own C runtime. To make matters worse, there was a system C runtime, but most applications didn't link against it, MinGW being a notable exception. AFAIU, this was one of the root causes of the notorious DLL Hell. Applications would often crash because libA malloc'd a pointer, which was free'd by libB. That is, there was no shared, global heap. This was a far less obvious pitfall than mixing objects between libA and libB or even libA-v1 and libA-v2. Even if libA-v2 was otherwise backward ABI compatible with libA-v1, if they were built by different versions of Visual Studio you could still end up with heap corruption. Indeed, this could happen if two vendors compiled the exact same source code. If an application install overwrote a library in the shared system folder, boom, applications could begin crashing for no obvious reason.
AFAIU, over the years Windows tried to mitigate this with various hacks for detecting cross-heap pointer freeing. But last time I checked their final approach was to guarantee backward compatibility (including backward heap compatibility) for all future Visual Studio C runtimes; ditto for the system C runtime. IOW, Microsoft committed themselves to maintaining a lot of internal runtime magic to preserve binary compatibility across time, which is functionally what glibc has done using versioned symbols. Of course, it also became less common on Windows to keep DLLs in shared folders.
The system and the app C runtimes are one and the same in Win10+.
As for cross-DLL interop: the usual solution was to avoid the C stdlib altogether and just use the underlying Win32 API functions to manage memory that has to cross the boundary. Or, in COM land, every object manages its own heap memory, exposing deallocation via the ABI-standard IUnknown::Release.
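Purely to illustrate the ownership rule being described here, a minimal Python ctypes sketch; it uses glibc on Linux as a stand-in for "a particular C runtime", and the library name is platform-specific:

    # The rule: a pointer must be released by the same runtime that allocated it.
    import ctypes

    libc = ctypes.CDLL("libc.so.6")          # the runtime that owns this heap
    libc.malloc.argtypes = [ctypes.c_size_t]
    libc.malloc.restype = ctypes.c_void_p
    libc.free.argtypes = [ctypes.c_void_p]

    buf = libc.malloc(64)                    # allocated by this runtime...
    libc.free(buf)                           # ...so it must be freed by it too

    # The pre-UCRT Windows failure mode was the moral equivalent of handing
    # `buf` to a *different* CRT's free(), i.e. a different heap: undefined
    # behaviour, typically showing up later as heap corruption.

COM's Release is the same discipline at the object level: the code that allocated the object is the code that ultimately frees it.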
Disk space is cheap, particularly when we're talking about libs that consume a few hundred kb each.
On the other hand, software distributions should continue to rely on shared libraries for their own software, but third-party compiled apps that are intended to be cross-distro/cross-arch should try to bundle as much as possible.
This is why I prefer /opt over /usr/local for third-party compiled apps.
Maybe. Apps wouldn't have to bundle their own versions of libraries if those libraries actually cared about backwards compatibility. Bundling is just an unfortunate workaround for libraries continually breaking their ABI.
Windows 10 can run virtually all apps compiled against Windows 2000 just fine, and those apps did not have to bundle their own graphical toolkits. Windows has gone through several new toolkits but they always preserve the old ones so that old programs continue to work.
By contrast GTK has regularly broken things between even minor version updates. Distributions also drop the old major versions of toolkits much more quickly. GTK3 was first released in 2011, but by 2018 most distributions no longer provided GTK2 pre-installed.
Is it any wonder that no one can ship and maintain a binary app that targets GTK without bundling it? Of course bundling sure looks like an attractive solution in this environment, but it's the environment that's the problem.
>Windows 10 can run virtually all apps compiled against Windows 2000 just fine, and those apps did not have to bundle their own graphical toolkits.
They do if they're built using Qt, GTK, WxWidgets, etc. Also, shipping DirectX and VC++ runtime libraries, .NET runtimes, etc. was and (for whatever reason) still is common, and stuff just doesn't work without them. Plus whatever else the program needs, like a whole Python runtime or something.
>Is it any wonder that no one can ship and maintain a binary app that targets GTK without bundling it?
Distributions have no problem doing it. If you're shipping something outside the distribution why would you ever expect that shipping only half your program would be feasible? There is no OS where that works.
Windows has many solutions to this. It is entirely possible for the apps to use a shared dynamically linked runtime, and the OS provides mechanisms to have several versions installed globally side by side, loading the correct one as needed for each app.
> It would be great if you could install Qt 5.9, 5.12, and 5.14 in parallel and have apps use the latest version (except for that one app that triggers a bug, where you pin it to 5.9).
The underlying library-loading system on Linux handles this just fine, and has for decades (look up "soname" for details).
The problem is that system package managers want to ship "just the latest", and maintainers have to take extra steps to enable multiple versions to be installed and loaded side by side.
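For anyone curious what "loading by soname" looks like from userland, a small sketch; the library name is just an example and assumes zlib is installed:

    # Consumers ask for a specific ABI major version (the soname), never
    # "whatever file is newest", so several majors can coexist on disk.
    import ctypes

    zlib = ctypes.CDLL("libz.so.1")              # soname: ABI major version 1
    zlib.zlibVersion.restype = ctypes.c_char_p
    print(zlib.zlibVersion())                    # e.g. b'1.2.13'

    # A binary linked against a (hypothetical) libz.so.2 would load that file
    # instead, and both .so.1 and .so.2 can be installed at the same time.
    # The catch for the Qt example above is that 5.9 and 5.14 share one soname
    # (same major), so keeping both around is where the "extra steps" come in.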
People started to get a clue about this 5-10 years ago though. It used to be much worse. I recall having to deal with it daily and I don't think about it much any more - more like a weekly exception.
In fact, now also I get some places trying to be too helpful to the point of being confusing. For example, a trainer store that auto-detects location and lists prices in GBP but sizes in US (without indication) and doesn't actually have an EU/UK distribution centre. So whoops it's going to take ages to arrive, it gets re-listed in USD at the final checkout, whoops again, 3% charge from my credit card, and then the wrong size arrives. But at least I knew the rough GBP amount!
I don't get how the front page could be so clearly misleading and not expect to get found out. It's all about nothing hidden and "that's all you pay" until you click pricing and then there are many different charges and models.
Nobody expects to be able to pull a 10,000-gallon tanker truck up to a "free refills" soda station in a fast food joint and fill it.
Anybody who's in the target market for these also understands that "Includes IoT cellular connectivity for 10 years" isn't going to mean "Cool! I'll get one of these to stream 4K video to YouTube 24x7x365x10, sweet!!!"
No, I'd expect it to mean free refills for me for the duration of my meal, like most people would.
My point is, there is quite an involved pricing model - 5 tiers, each with multiple charges in it, none of which obviously correspond to the advertising on the front page. They even repeatedly use the phrase "for X MB", which is unclear to me. I first read it as storage; I assume it means total upload/download? The numbers for this on other devices on the front page are not mentioned on the pricing page. Charging categories are also a mixture of sold-by-the-MB, sold-by-usage, and pre-sold - within a single tier.
The front page is selling "look, it's very clear and simple" and it's just obviously not clear and simple. Unclear why that view garners downvotes - this is honest feedback about their marketing and pricing. Even if they are mostly talking about the free tier, it's not clear.
I’m not in the IoT space and still understood that they probably have tons of optimizations to reduce data usage on the devices and probably buy wholesale data to form one big pool.