I don't buy this, considering that wasm still doesn't actually have access to the Web APIs that JavaScript has access to (and relies on JS to even be loaded in the first place).
In my opinion, if that's what counts as being a supported language, then JS was never the only supported language to begin with since you could always compile other languages to run on the web through JavaScript (for example Emscripten with the asm.js target predates WebAssembly). The only thing that wasm currently offers over the previous status quo is that it's faster (for some workloads).
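To make that concrete, here's roughly what running a wasm module on the web still looks like; the module name and the `env.log` import are made up for the example, but the shape is the point: JS fetches the module, instantiates it, and hands it every "Web API" it will ever touch.

```ts
// A JS/TS script has to load the .wasm file; the browser won't run it on its own.
// The import object is the module's only window onto the outside world.
const imports = {
  env: {
    // The wasm code can't call console.log (or fetch, or the DOM) itself;
    // we forward it explicitly. "env.log" is a hypothetical import name.
    log: (value: number) => console.log("from wasm:", value),
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("demo.wasm"), // hypothetical module
  imports,
);

// Exports are callable from JS, but the dependency runs the other way:
// without this script, the module never even loads.
const exports = instance.exports as { add: (a: number, b: number) => number };
console.log(exports.add(2, 3));
```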
Overall good points, but don't forget that pre-approval processes resulted in asking for resources that exceeded near-term needs, and once approved, ongoing costs were rarely fully reviewed. I have personal experience with "enterprise" clients turning a request for server resources into a huge, months-long process, reminding us that changes would take 30+ days. When the project was over and we did everything we could to let them know that the servers could be spun down or put to other uses, we got back an "ok thanks!", only to find them still running our project code YEARS later. This is infra that was costing them about 1 engineer FTE per year, not even a $10/mo toy env.
I wonder if this is just them aligning themselves with the new EU AI Act at the same time that they are rolling out an EU region[1]. From my understanding, that act, which takes effect soon, makes it a requirement to explicitly explain in your TOS the use cases for AI in your use of data. Before this law you didn't really have to say whether you used AI.
I suspect the article is about returning the contributing tables in a join as multiple relations... while that's not possible in SQL proper, it is possible with stored procedures (at least in T-SQL), which can return multiple result sets.
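A rough sketch of what I mean, from memory (the table, procedure, and connection details are made up, and I'm using the node `mssql` package to read the result sets; the client API details may differ slightly):

```ts
import sql from "mssql";

// Hypothetical stored procedure: two SELECTs, so the caller gets the two
// sides of the join back as two separate result sets instead of one
// flattened relation.
const createProc = `
CREATE OR ALTER PROCEDURE dbo.GetOrderWithCustomer @OrderId INT AS
BEGIN
    SELECT o.* FROM dbo.Orders o WHERE o.OrderId = @OrderId;
    SELECT c.* FROM dbo.Customers c
    JOIN dbo.Orders o ON o.CustomerId = c.CustomerId
    WHERE o.OrderId = @OrderId;
END`;

async function main() {
  const pool = await sql.connect("mssql://user:pass@localhost/demo"); // placeholder
  await pool.request().batch(createProc);

  const result = await pool
    .request()
    .input("OrderId", sql.Int, 42)
    .execute("dbo.GetOrderWithCustomer");

  // Each SELECT shows up as its own recordset: multiple relations from one
  // call, which a single SQL query can't give you.
  const [orders, customers] = result.recordsets;
  console.log(orders, customers);
}

main().catch(console.error);
```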
I don't know about open source, but back around 2000 it was fairly common to implement a "VBA host". It was also easy: a few dev days to scaffold, and minutes to hours of dev time to expose functions. We did it with a suite of related applications (used internally by an early ebook publisher) that interacted a lot with Office. Introspection, auto-complete and integrated help documentation all worked, and technically minded editorial staff could automate lots of their work with the Visual Studio-provided IDE, debuggers, etc.
AND a several-orders-of-magnitude larger investment in tech writers, and in manual "how can my customer break this software?" testing.
When I worked in shrink-wrapped software back in the dark ages, the documentation-writing team and a very extensive manual QA department were each the same size as the development department. Think people trying for DAYS to find out why, out of hundreds of thousands of active users, a few dozen reported being able to launch 2 instances of the main window when that should not be allowed. (Fix: a race condition in the "double click" handling code, with a window of a few milliseconds.)
I have, in my life as a web developer, had multiple "academics" urgently demand that I remove error bands, bars, notes about outliers, confidence intervals, etc. from graphics at the last minute so people are not "confused".
I obviously cannot assess the validity of the requests you got, but as a former researcher turned product developer, I have several times had to make the decision _not_ to display confidence intervals in products, and to keep them as an internal feature for quality evaluation.
Why, I hear you ask? Because, for the kind of system of models I use (detailed stochastic simulations of human behavior), there is no good definition of a confidence interval that can be computed in a reasonable amount of computing time. One can design confidence measures that can be computed without too much overhead, but they can be misleading if you do not have a very good understanding of what they represent and do not represent.
To simplify: the error bars I was able to compute were mostly a measure of precision, but I had no way to assess accuracy, which is what most people assume error bars mean. Showing the error bars would therefore have given a false sense of quality that I did not feel confident conveying, so not displaying those measures was actually done as a service to the user.
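A toy illustration of the distinction (nothing to do with the actual models, just the general point):

```ts
// Suppose the true value is 10, but the simulation has a systematic bias
// it cannot see. All numbers here are invented for illustration.
const TRUE_VALUE = 10;

function biasedSimulationRun(): number {
  return TRUE_VALUE + 2 + (Math.random() - 0.5) * 0.2; // bias + small noise
}

const runs = Array.from({ length: 1000 }, biasedSimulationRun);
const mean = runs.reduce((a, b) => a + b, 0) / runs.length;
const variance = runs.reduce((a, b) => a + (b - mean) ** 2, 0) / runs.length;

// The spread across runs is tiny, so error bars derived from it look great
// (precision)...
console.log("mean:", mean.toFixed(2), "stddev:", Math.sqrt(variance).toFixed(3));
// ...while the estimate is still ~2 away from the truth (accuracy), and
// nothing computed above can tell you that.
```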
Now, one might make the argument that if we had no way to assess accuracy, the type of models we used was just rubbish and not much more useful than a wild guess... Which is a much wider topic, and there are good arguments for and against this statement.
After a lot of back-and-forth some years ago, we settled on a third option: if the error bars would be too big (for whatever definition of "too big" we used back then), don't show the data and instead show a "not enough data points" message. Otherwise, show the data without the error bars.
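Something like this, in spirit (the threshold and names are invented; the real check was tied to our specific models):

```ts
interface Estimate {
  value: number;
  ciHalfWidth: number; // half-width of the internal-only confidence interval
}

// Hypothetical definition of "too big": relative to the value itself.
const MAX_RELATIVE_CI = 0.5;

function renderEstimate(e: Estimate): string {
  if (e.ciHalfWidth > MAX_RELATIVE_CI * Math.abs(e.value)) {
    // Refuse to show a number at all rather than show a shaky one.
    return "not enough data points";
  }
  // Otherwise show the point estimate, deliberately without error bars.
  return e.value.toFixed(1);
}
```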
That is baldly justifying a feeling of superiority and authority over others. It's not your job to trick other people "for their own good". Present honest information, as accurately as possible, and let the chips fall where they may. Anything else is a road to disaster.
Some people won't understand error bars. Given that we evolved from apes and that there's a distribution of intelligences, skill sets, and interests across all walks of society, I don't place blame on anyone. We're just messy as a species. It'll be okay. Everything is mostly working out.
Sometimes they do this because the data doesn't entirely support their conclusions. Error bars, notes about outliers, etc. often make this glaringly apparent.
Can you be more specific (maybe point to a website)? I am trying to imagine the scenarios where a web developer would work with academics and also do the data processing for the visualization. In the few scenarios I can think of where an academic works directly with a web developer, they would almost always provide the full figures.
It really depends on what it is for. If the assessment is that the data is solid enough for certain decisions, you might indeed only show a narrow result in order not to waste time and attention. If it is for a scientific discussion then it is different, of course.
"map stuff" is much harder to understand than your query implies. As an example
Have you tried understanding all the possible things that you can get in 'address_components' depending on the input params, the region you are querying from, the region you results are coming from, the geopolitical situation on that data, the entity that was matched etc etc
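To give a flavor of what that means in practice, here's a rough sketch; the component types are the documented geocoding ones as far as I recall, but the shape is simplified and the fallback list is illustrative, not exhaustive:

```ts
// Simplified shape of one entry in a geocoding result's address_components.
// Which entries you actually get varies with the query, the region bias,
// the language, and the political view of any disputed borders.
interface AddressComponent {
  long_name: string;
  short_name: string;
  types: string[]; // e.g. ["locality", "political"]
}

// Naive "just give me the city" helper. The fallback chain exists because
// different countries and result types express "city" as different
// component types, and some results have none of these at all.
function guessCity(components: AddressComponent[]): string | undefined {
  const preferred = [
    "locality",
    "postal_town",
    "sublocality_level_1",
    "administrative_area_level_2",
  ];
  for (const type of preferred) {
    const match = components.find((c) => c.types.includes(type));
    if (match) return match.long_name;
  }
  return undefined; // genuinely ambiguous; happens more often than you'd think
}
```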
Don't forget entities inside of other entities, like businesses within a mall.
Don't forget that the user might want a specific service from the bank. The "bank" label doesn't tell you whether they accept street traffic, have live tellers, are just an ATM, or are a corporate office, etc.
This is easy to solve... Just publish 'views' of the data simplified for common use cases, and also offer the raw data for those who like joining 50 tables to know extreme corner cases, like whether the bar down the road's wheelchair-accessible toilet will be open during the summer-time hour shift.
TL;DR: it involves using Apple Configurator to make a custom mobileconfig profile that points to your proxy, and then also installing the certificate with the same method.