
It wasn't the only language browsers supported (VBScript was allowed client-side in early IE), and it also isn't anymore with wasm


> and also isn't anymore with wasm

I don't buy this, considering that wasm still doesn't actually have access to the Web APIs that JavaScript has access to (and relies on JS to even be loaded in the first place).

In my opinion, if that's what counts as being a supported language, then JS was never the only supported language to begin with, since you could always compile other languages to run on the web through JavaScript (for example, Emscripten's asm.js target predates WebAssembly). The only thing that wasm currently offers over the previous status quo is that it's faster (for some workloads).
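
As a concrete illustration (a minimal sketch: "module.wasm", the "setTitle" import, and the exported "main" are all made up for the demo), even loading a wasm module goes through JS, and every Web API call has to be bridged through imports:

    // JS is still the loader: fetch + instantiate happen here, not in wasm
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("module.wasm"), // hypothetical module
      {
        env: {
          // wasm has no direct DOM access; JS must bridge each Web API
          setTitle: (n: number) => { document.title = `count: ${n}`; },
        },
      },
    );
    (instance.exports.main as () => void)(); // assumes the module exports a main()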


Overall good points, but don't forget that pre-approval processes resulted in asking for resources that exceeded near-term needs, and once approved, ongoing costs were rarely fully reviewed. I have personal experience with "enterprise" clients making a huge, months-long process out of getting server resources, reminding us that changes would take 30+ days. When the project was over and we did everything we could to let them know the servers could be spun down or put to other uses, we got back an "ok thanks!" only to find them still running our project code YEARS later. This is infra that was costing them about 1 engineer FTE per year, not even a $10/mo toy env.


I wonder if this is just them aligning themselves with the new EU AI Act at the same time that they are rolling out an EU region [1]. From my understanding, that act, soon to take effect, makes it a requirement to explicitly explain the use cases for AI in your use of data in the TOS. Before this law you didn't really have to say if you used AI.

[1] https://blog.sentry.io/sentrys-eu-data-region-now-in-early-a...


I suspect the article is about returning the contributing tables in a join as multiple relations... While not possible in SQL proper, this is possible with stored procedures (at least in T-SQL):

https://stackoverflow.com/questions/40013747/return-multiple...
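
For instance, a hedged sketch assuming the node "mssql" driver and its recordsets array (the connection string and table names are placeholders): two SELECTs in one batch or stored procedure come back as two distinct result sets.

    import sql from "mssql";

    const pool = await sql.connect("Server=localhost;Database=shop;..."); // placeholder
    // Two SELECTs in one round trip -> two result sets
    const result = await pool.request().query(`
      SELECT * FROM orders    WHERE customer_id = 42;
      SELECT * FROM customers WHERE id = 42;
    `);
    console.log(result.recordsets.length); // 2
    const [orders, customers] = result.recordsets;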


I don't know about open source, but back around 2000 it was fairly common to implement a "VBA host". It was also easy: a few dev days to scaffold and minutes to hours of dev time to expose functions. We did it with a suite of related applications (used internally by an early ebook publisher) that interacted a lot with Office. Introspection, auto-complete, and integrated help documentation all worked, and technically minded editorial staff could automate lots of their work with the Visual Studio-provided IDE, debuggers, etc.


AND a several-orders-of-magnitude larger investment in tech writers, and in manual "How can my customer break this software?" testing.

When I worked in shrink-wrapped software back in the dark ages, the documentation-writing team and a very extensive manual QA department were each the same size as the development department. Think people trying for DAYS to find out why, out of 100s of thousands of active users, a few dozen reported being able to launch 2 instances of the main window when that should not be allowed. (Fix: a race condition in the "double click" handling code with a window of a few milliseconds.)
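
That class of bug, in sketch form (Node-flavored, with a hypothetical launchMainWindow(); the original was native code, but the shape of the race is the same):

    import fs from "node:fs";

    declare function launchMainWindow(): void; // hypothetical

    // Racy check-then-act: two near-simultaneous launches can both pass the
    // exists check inside the same few-millisecond window.
    if (!fs.existsSync("/tmp/app.lock")) {
      fs.writeFileSync("/tmp/app.lock", String(process.pid));
      launchMainWindow();
    }

    // Atomic version: flag "wx" fails if the lock file already exists,
    // so exactly one instance wins.
    try {
      fs.writeFileSync("/tmp/app.lock", String(process.pid), { flag: "wx" });
      launchMainWindow();
    } catch {
      // another instance won the race
    }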


I have, in my life as a web developer, had multiple "academics" urgently demand that I remove error bands, bars, notes about outliers, confidence intervals, etc. from graphics at the last minute so people are not "confused"

It's depressing


I obviously cannot assess the validity of the requests you got, but as a former researcher turned product developer, I several times had to make the decision _not_ to display confidence intervals in products, and to keep them as an internal feature for quality evaluation.

Why, I hear you ask? Because, for the kind of system of models I use (detailed stochastic simulations of human behavior), there is no good definition of a confidence interval that can be computed in a reasonable amount of computing time. One can design confidence measures that can be computed without too much overhead, but they can be misleading if you do not have a very good understanding of what they represent and do not represent.

To simplify, the error bars I was able to compute were mostly a measure of precision, but I had no way to assess accuracy, which is what most people assume error bars mean. So showing the error bars would have actually given a false sense of quality, which I did not feel comfortable giving. So not displaying those measures was actually done as a service to the user.
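
A toy illustration of that distinction (all numbers invented; nothing to do with the actual models): an estimator can be very precise and still precisely wrong.

    // A biased estimator: repeated runs agree with each other (precision)
    // while all missing the true value (accuracy).
    const trueValue = 100;
    const biasedRun = () => 80 + (Math.random() - 0.5) * 2; // bias -20, noise ±1

    const runs = Array.from({ length: 1000 }, biasedRun);
    const mean = runs.reduce((a, b) => a + b, 0) / runs.length;
    const sd = Math.sqrt(runs.reduce((a, b) => a + (b - mean) ** 2, 0) / runs.length);

    console.log(`estimate: ${mean.toFixed(1)} ± ${sd.toFixed(2)}`); // ~80.0 ± 0.58, tight bars
    console.log(`true error: ${(mean - trueValue).toFixed(1)}`);    // ~-20, way off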

Now, one might make the argument that if we had no way to assess accuracy, the type of models we used was just rubbish and not much more useful than a wild guess... Which is a much wider topic, and there are good arguments for and against this statement.


Statistically illiterate people should not be making decisions. I'd take that as a signal to leave.


Statistically speaking, you're in the minority. ;)


Maybe not in the minority for taking it as a signal to leave, but in the minority for actually acting on that signal.


That's fair. :)


The depressing part is that many people actually need them removed in order to not be confused.


But aren’t they still confused without the error bars? Or confidently incorrect? And who could blame them, when that’s the information they’re given?

It seems like the options are:

- no error bars, which misleads everyone

- error bars, which confuse some people and accurately inform others


Yep.

See also: Complaints about poll results in the last few rounds of elections in the US. "The polls said Hillary would win!!!" (no, they didn't).

It's not just error margins, it's an absence of statistics of any sort in secondary school (for a large number of students).


After a lot of back-and-forth some years ago, we settled on a third option: If the error bars would be too big (for whatever definition of "too big" we used back then), don't show the data and instead show a "not enough data points" message. Otherwise, if we were showing the data, show it without the error bars.
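
In sketch form (the 30% cutoff here is an invented stand-in for whatever threshold we actually used):

    type Point = { value: number; ciHalfWidth: number };

    function render(p: Point): string {
      // error bars too wide relative to the value -> show no number at all
      if (p.ciHalfWidth > 0.3 * Math.abs(p.value)) return "not enough data points";
      return p.value.toFixed(1); // otherwise shown, without error bars
    }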


Yeah, when people remove that kind of information so as not to confuse people, they are aiming at making them confidently incorrect.


That is baldly justifying a feeling of superiority and authority over others. It's not your job to trick other people "for their own good". Present honest information, as accurately as possible, and let the chips fall where they may. Anything else is a road to disaster.


Some people won't understand error bars. Given that we evolved from apes and that there's a distribution of intelligences, skill sets, and interests across all walks of society, I don't place blame on anyone. We're just messy as a species. It'll be okay. Everything is mostly working out.


> We're just messy as a species. It'll be okay. Everything is mostly working out.

{Confidence interval we won't cook the planet}


Sometimes they do this because the data doesn't entirely support their conclusions. Error bars, notes about data outliers, etc. often make this glaringly apparent.


Can you be more specific (maybe point to a website)? I am trying to imagine the scenarios where a web developer would work with academics and do the data processing for the representation. In the few scenarios I can think of where an academic works directly with a web developer, they would almost always provide the full figures.


It really depends what it is for. If the assessment is that the data is solid enough for certain decisions you might indeed only show a narrow result in order not to waste time and attention. If it is for a scientific discussion then it is different, of course.


Most people really don’t understand error bars; see https://errorbars.streamlit.app/


"map stuff" is much harder to understand than your query implies. As an example

Have you tried understanding all the possible things that you can get in 'address_components' depending on the input params, the region you are querying from, the region your results are coming from, the geopolitical situation around that data, the entity that was matched, etc.?

https://developers.google.com/maps/documentation/geocoding/r...

Don't forget entities inside of other entities, like businesses within a mall.

Don't forget that the user might want a specific service from the bank. The "bank" label doesn't tell you if they accept street traffic, have live tellers, are just an ATM, or are a corporate office, etc.
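
To make the address_components point concrete, here is a hedged sketch of extracting a city name (the shapes follow the linked docs, but which component types actually appear varies by region, input, and matched entity):

    type AddressComponent = { long_name: string; short_name: string; types: string[] };

    const byType = (cs: AddressComponent[], t: string) =>
      cs.find((c) => c.types.includes(t))?.long_name;

    // "locality" is often missing; some regions use postal_town, others
    // sublocality or an administrative_area_level_* instead
    function cityName(components: AddressComponent[]): string | undefined {
      return (
        byType(components, "locality") ??
        byType(components, "postal_town") ??
        byType(components, "sublocality") ??
        byType(components, "administrative_area_level_2")
      );
    }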


This is easy to solve... Just publish 'views' of the data simplified for common use cases, and also offer the raw data for those who like joining 50 tables to learn extreme corner cases, like whether the bar down the road's wheelchair-accessible toilet will be open during the summertime hour shift.


I believe the article shows screenshots of it on Apple TV (at least).

Looks like it uses the method in this article: https://developer.apple.com/library/archive/qa/qa1948/_index... but I have not (yet) tried it


I followed this guide on how to install a self-signed certificate and set up the routing to my proxy server on an Apple TV: https://lucaslegname.github.io/mitmproxy/2020/04/10/mitmprox...

TL;DR it involves using Apple Configurator to make a custom mobileconfig profile pointing to your proxy, and then also installing the certificate with the same method.


I didn't look, but UCUM (http://unitsofmeasure.org), which is part of the HL7 standard for electronic medical records, might be an interesting source/reference.

It has zillions of base units and an algebra syntax for defining many more.

Edit: "common" units https://github.com/ucum-org/ucum/tree/main/common-units

