

Thank you for that!


You can sign up on their website: https://www.stackit.de/en/

While there are no free credits, the services are priced pay-per-use to the minute, with a much simpler pricing model than large hyperscalers like AWS. See prices for their EC2 equivalent here: https://www.stackit.de/en/pricing/cloud-services/iaas/stacki...

You can find the docs here: https://docs.stackit.cloud/stackit/en/knowledge-base-8530170...


Note that you need to be incorporated in Germany, Austria or Switzerland to use it. And they don't allow individuals to open accounts, only companies.

"The European cloud" that doesn't allow sign ups from Europe is extremely ironic.

I don't know how they keep getting all this press without actually delivering anything


Lidl got SAP's award for best customer a few years before admitting they had wasted half a billion on a SAP implementation.

It's the same thing again.


> award for best customer

I've never heard of this. Does it mean best cash cow?


I expect nothing less from SAP


I came here to state exactly this. As a Dutch individual who has 'cloud' high on his CV, I would like to create an account and test this to see if it is something I should invest my time in to make it part of my CV. But ... they won't allow that.

Ah well, next!


Their pricing page is funny. Can I have 2 RAMs please?

My physics teacher would get spitting mad at them for not specifying the unit.

Of course their billing is also 'hours'. Instead of 'hourly'.


They will likely be using Phoenix: https://www.dataport.de/about-phoenix/

The German government developed a fully free and open source solution that is quite similar (consisting of Nextcloud, Jitsi, Collabora etc.) and can be deployed using Kubernetes: https://gitlab.opencode.de/bmi/opendesk


It’s kind of a bummer that there are a bunch of these separate OSS initiatives in Spain, France, Germany and a few other countries...

Instead of everyone rolling their own subvariant of Nextcloud, Matrix, LibreOffice etc, imagine if there was an EU mandate set up and some serious money was budgeted to it.

You’d get a triple multiplier of more budget, less duplication of effort, and a self-reinforcing cycle where more use means the product becomes better through contribs which makes use even more attractive.


Speaking from the Matrix side: it would definitely be preferable to have a large reliable source of funding for public sector Matrix deployments rather than juggling between loads of small overlapping ones (especially when they end up failing to route any $ to the upstreams). That said, Schleswig Holstein is looking to route $ upstream in this instance.


This is a mistake at this phase. Right now, we Europeans still haven't figured out the best way, so each country tries something different. With time, failures will be replaced with winners.


but ours is better! also we need our own national standards nobody else uses!


Yeah, that must be why it's been such an overwhelming success story ever since Munich started it in 2003.

/s, in case anyone wondered.


You are /s'ing, but the fact that it didn't spread out of Munich until 2020 is a success in itself: the failure got confined to a very small part of Europe, instead of getting deployed in the whole Bavaria, Germany or even Europe. Damages were limited, they had nearby neighbors to compare, they did the partial rollback (losing "only" 90 million instead of 900), analyzed the failures and fixed a lot of them just by going LibreOffice instead of OpenOffice.

The second attempt had a better outcome, and the German government agrees. Now they have the know-how from Munich, and can have a degree of confidence that things will be fine. You don't need an overwhelming success; just being on par with the Microsoft Office solution using free software already puts you ahead. It's Microsoft who needs to be clearly ahead to justify the cost.

Still, it's better to have it limited to Germany while others watch for a year or two if it works as intended.


You are underestimating German federalism - there are several competing providers for each piece of specialized software used by German municipalities, often with huge customisations. There is no way you could mandate a specific product. What can be done from above, in some cases, is specifying requirements (e.g. data storage, privacy laws, code auditability...) and APIs (like APIs to federal agencies) and letting the customers and providers create solutions. It might be messier, but in the end you are not dependent on a single product.


I think I want less bureaucracy in my development, so I'm happy to see multiple versions and let them battle it out


While your small projects with next to zero market share fight it out between themselves, Microsoft keeps swallowing more market share making more billions.

Who do you think is the real winner of your sandpit fight?


Keeping the separate projects at the national level also precludes the furious lobbying efforts against it that could target members of the union with... "friendlier ears". It also allows for smaller scale testing to work out kinks, in the same way that some US federal law started as laws in specific states.


You are right, we need to make a bloated project and need to pour in millions into it like we did with Gaia-X. Then it will be a big trash-fire with smoke visible from afar and not just some small sandpit.


Until you can bribe officials to the tune of billions, the quality of your software won't matter. The only reason MS keeps winning is that it keeps bribing.


MS also keeps winning because they have billions of dollars to pour into the UX of their Office suite. The FOSS community doesn't spend a lot of money on hiring professional UI designers, and its "do-ocracy" doesn't handle UI design well because UI design isn't seen as "doing" so much as nitpicking the choices of the programmers who are "doing all the work".

MS bribery is a factor, but if we use it as an excuse to ignore our own faults as a community then we're just screwing ourselves over.


I've never met anyone who said 'yes, the new change in the Office UI is amazing, I'm so happy all my memorisation of UI elements is now obsolete'.


I don't care how many upvotes the parent comment has, it needs more.


But isn’t this the very centralisation we’re trying to avoid?


Centralization is about infrastructure, so no. This would be a monoculture, assuming nobody forks. But if it's a FOSS monoculture and nobody forks, that implies that everyone is fairly satisfied with it.


I didn’t think that centralization itself was the problem… just where the centralization was occurring (outside the EU).


Not if everyone runs their own servers.


When it's open source, forking / moving is always an option and you're not vendor locked or being charged exorbitant fees with no other option.


And of course it would have to be OSS for it to be in any way justifiable that public money is put into it.


State mandated projects give you things like the Lidl AWS "alternative" whose pricing page is a downloadable PDF.


The PDF makes looking at the prices much easier.

But if you prefer, you can also see it on the website, for example: https://www.stackit.de/en/pricing/cloud-services/iaas/stacki...


Do you have a specific critique of the StackIt cloud that makes you put "alternative" in quotes? Do share if that's the case.


"The pricing page is a PDF" is right there in my post. I can get deeper but as we say in my language, "for those who want to understand, half a word is enough".


> I can get deeper

Please do.

I can't say I'd trust the opinion of someone that gives "price page is a PDF" as the only reason for invalidating a cloud offer. :)


You didn't explain what's wrong with having the pricing page as a PDF. WTF is supposed to be wrong with that???


Can they fax me the pricing page?


> imagine if there was an EU mandate set up and some serious money was budgeted to it.

We saw what that looks like already. "Gaia-X" was a huge clusterfuck and burned to the ground before it even began.


Yeah, then they could use it in... Jorvi. ;-D

Could be familiar to you, but if not: https://en.wikipedia.org/wiki/Jorvi_Hospital


Very typical for Germany, they'd rather roll their own solutions than cooperate with other countries and pool resources. But it's even worse than that, each state within Germany does its own bureaucracy differently as well, to varying degrees.


Coming myself from MySQL to Postgres I found PgAdmin (https://www.pgadmin.org/screenshots/#7) easy to use


pgAdmin has gotten lots of bad feedback. I'm using DBeaver or the IntelliJ sidebar to connect to the database.

DBeaver is more useful. But you can do everything from the command line. You use SQL to write database migrations anyway, no?
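
A minimal sketch of the scripted route, assuming psycopg2 and a made-up DSN and migration file:

    import psycopg2

    # Hypothetical connection string and migration path, just to illustrate the idea.
    conn = psycopg2.connect("dbname=app user=app")
    with conn, conn.cursor() as cur:
        with open("migrations/001_add_users_table.sql") as f:
            cur.execute(f.read())  # runs the plain-SQL migration; commits on success
    conn.close()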


I guess it's better now. Last time I tried years ago it was horrible


> In a follow-up post a day after his initial Tweet, Johnie noted “inaccuracy in the ASUS router tool.” Other LG smart washing machine users showed device data use from their apps. It turns out that these appliances more typically use less than 1MB per day.


Other users having low data usage doesn't prove they also have low usage?

Are people so uncomfortable saying "we don't know" that they use such loose reasoning to "get the answer"?

I'll re-read the article/tweets, maybe I missed something.


> Other users having low data usage doesn't prove they also have low usage?

The follow-up Tweet is from the same person who reported the initial issue.

This happens a lot: Devices get a new IP address but some tool has an internal database that remembers the old device at that address. It then shows stats for the new device at that IP address but reports it as coming from whatever it was initially recognized as.


Why are you ignoring that the person who made the original claim about 3.7 GB of data per day stated that it could be an accuracy issue with his router?


My Ubiquiti UniFi UDM is not good at device identification. It's kind of annoying because I have this big list of devices on the network and it's peppered with devices that I know don't exist. I'd appreciate it if it said something like "Maybe iPad Air", instead of just "iPad Pro 2nd Gen" when I know no such device is on my network.


iOS devices (unsure about Android) use random MACs on wireless networks by default.

https://support.apple.com/en-us/102509


The random MAC is generated only once per network, and re-used for every subsequent connection to it, until the network settings are reset.


The random MAC would still be within the vendor prefix though, and a MAC address won’t identity a specific device type anyway.

Edit: I’m wrong


No it isn’t, vendor prefixes are sort of an anachronism. Bit 41 (bit 1 of the first octet) is reserved for local (random) use. That plus the group bit (bit 40) set to 0 means the second digit of the human-readable MAC is 2, 6, A or E, but that's it.
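
If it helps, a quick sketch of that check in Python (the example MACs are made up):

    def is_locally_administered_unicast(mac: str) -> bool:
        first_octet = int(mac.split(":")[0], 16)
        local = bool(first_octet & 0b10)    # U/L bit set -> locally administered ("random")
        unicast = not (first_octet & 0b01)  # I/G (group) bit clear -> unicast
        return local and unicast

    print(is_locally_administered_unicast("da:a1:19:01:23:45"))  # True, second digit is 'a'
    print(is_locally_administered_unicast("f0:18:98:01:23:45"))  # False, vendor-assigned style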


My bad, thanks for the correction! You still can’t identify a specific device type based on the MAC address though, right?


On the same WiFi network, yes you can - it uses the same MAC on the same SSID. It remembers the "random" MAC after the first connection (and if you first connected to the network before they added MAC randomisation in iOS 14, it "remembers" the actual MAC of the device, so you didn't have compatibility issues after the iOS 14 upgrade).

So you can't use it to track devices between multiple SSIDs including when scanning for networks, but you can use it to persistently identify a device when connected to the same network.


You misread the question.


Yep, you're right. Agree with the other post - the randomly generated MACs have no manufacturer info.


Other than perhaps the manufacturer from the OUI, no.


There’s no manufacturer in a randomly generated local OUI.


Random aside for this, I believe this functionality existed for many years but actually hasn’t worked until recently. (Take this with a grain of salt)


I had to turn random MAC off; my Google mesh could not handle it. WiFi on my Samsung phone would only work for a couple of minutes.


Most personal devices now use randomized MACs so it's hard to ID them.

You can go via IP though, pull up your DHCP lease on your phone/laptop/whatever and match it to the same IP in Unifi, then manually name the device.


Do the UniFi products just try to use MAC addrs for this, or do passive/active TCP/IP fingerprinting?


I think it’s fingerprinting - if you look in the logs it gives you certainties of different devices that it thinks it might be


Just MAC registrations.


Occasionally it's fun to discover new devices.

"It thinks my WiFi dog feeder is a Technoelectrocom 56XR-2000? What the hell is (was) that?"


I found out that UniFi plug-in doorbell chimes use an ESP32 this way, because I saw one in my devices table before it had booted fully and identified itself with its real identity


Do you mean the non-PoE version? I have the PoE version sitting in a box somewhere and I'm an ESP32 enthusiast, so I'm wondering if that's what I'll be doing today. Surely they're using it just as a WiFi coprocessor? Or...?


The non PoE one, the one that just plugs into a normal power outlet

My PoE one has only ever identified as what Ubiquiti thinks it is, so no idea


The traffic stats my UDM shows are complete fiction. The data it presents makes no sense.


You can fix this by setting fixed IPs. But yes Unifi is not great at this.


Cisco ISE thinks all iPhones are FreeBSD.


That’s because your profiling is not set up correctly


I didn't set it up. :)


Note: that was inaccurately reported in the TomsHardware article

The fact that it shows up as using iMessage is the part that I said may be inaccurately reported.

Even now, I am still seeing some suspicious data usage. I started wiresharking it yesterday to track it down.


A megabyte a day still seems excessive.


If you agree to some form of anonymous tracking for diagnostics, I can see 1 MB being reasonable. This would be a periodic update on things like usage levels, part quality, etc.

Most likely that tracking acceptance is buried in some 500-page EULA, but that's a separate issue.


Why do you need an entire megabyte for that? Even if you did laundry five times a day, it shouldn't take more than a few bytes to store a few metrics.

Even if you're lazy and uploaded an uncompressed JSON array of objects, that shouldn't be more than a few kB. Way less if you compress it.

A megabyte is a LOT of data.
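
A rough sketch of that estimate, with made-up field names:

    import json, gzip

    # Five washes in a day, a handful of metrics per cycle.
    cycles = [{"ts": "2024-01-01T08:00:00Z", "program": "cotton", "spin_rpm": 1400,
               "water_l": 48.5, "duration_s": 3600, "error_code": 0}
              for _ in range(5)]

    raw = json.dumps(cycles).encode()
    print(len(raw), "bytes uncompressed")            # a few hundred bytes
    print(len(gzip.compress(raw)), "bytes gzipped")  # far less after compression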


I could easily see it measuring the forces and weight on the drum every 5 seconds (or even every 10 ms) during the whole wash, to be able to produce charts of vibration patterns, that engineers could use to correlate with failure. Remember -- when you're spinning at high speed to wring out the water, it's actually some pretty crazy strong forces.

Or other things measured every ~second, like stuff related to the motor, temperatures, humidity, etc. and other diagnostics.

Seems really easy to generate a megabyte if you consider time series. Even easier if it's in XML or JSON rather than a CSV.
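
Back-of-envelope, with guessed record sizes, just to show the orders of magnitude:

    wash_seconds = 2 * 3600  # a two-hour cycle
    json_record = 120        # ~120 bytes per verbose JSON sample (guess)
    csv_record = 30          # ~30 bytes per terse CSV row (guess)

    for label, interval in [("every 5 s", 5), ("every 10 ms", 0.01)]:
        samples = wash_seconds / interval
        print(f"{label}: JSON ~{samples * json_record / 1e6:.2f} MB, "
              f"CSV ~{samples * csv_record / 1e6:.2f} MB")
    # every 5 s:   JSON ~0.17 MB,  CSV ~0.04 MB
    # every 10 ms: JSON ~86.40 MB, CSV ~21.60 MB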


When I worked in a storage group years ago, the system that controls swipe card access for a building generated something like 3TB of Java exceptions a month.

Because of the criticality, it was on high-tier reliable SAN storage, replicated to a second site. IIRC, storage was something like $80/GB/mo.


Love how you said exceptions, rather than just logs. I am unfortunately (painfully) aware of exactly what you mean. The storage costs were the cherry on the top. But seriously, $240k and nobody raised a stink?


It’s one of those things that cloud storage helps with.

Because it was on prem, the chargeback model was associated with the business unit and not super granular. It got lumped in with another business function because it should be a trivial workload.

I found it when I was doing estimates for a new platform and the app’s growth numbers didn’t add up! Even at $240k, it wasn’t an obvious outlier.


You really think they're going to measure all that, upload it, send it to some expensive engineer, have them try to physically model the error, and then... what?

That might make sense during development, but there's no way they do that in a consumer product. If a part breaks they're just going to send out a replacement or the repair guy is gonna get some third party part. Recording that much detail would just be noise.

Even if they had specific parts sensors (doubt it, for costs), they could just process that locally and send up an error code, not the whole log.

I find that all pretty hard to believe, but if anyone has evidence to the contrary, I'd be glad to be proven wrong. I had an LG washing machine bought new a few years ago promising all sorts of bells and whistles and app integrations. But it was super janky and cheaply made, the app integration was terrible, the on-board memory would lose its configured settings, the entire LCD broke after a few weeks... it was not what I would consider well-engineered at all. If it was sending a megabyte a day I'd just assume it was yet another bug, not some forward-thinking QA.


I would suspect they measure everything and use almost none of it. Like most web services. This was a common complaint and low hanging fruit optimization, that people were storing metrics that never got read. Just in case.


It's cute you think there is some intelligence to this design... it's just whatever some PM/exec dreamed up and some low-level engineer implemented based on requirements. The data is likely just sitting there collecting digital dust.


I'm not really sure what you're asking:

Is he sure they're sending < 1 MB a day? Yeah.

Is he sure it's plausible it's measuring time series data? Yeah.

Is it plausible they measure vibration patterns? Yeah.

It's not about like "oh we'll send an electrical engineer to fix your specific vibration pattern", it's "we can collect data from the field to make forward-looking decisions": ex. maybe we switch supplier mix for replacement motors, and a data scientist ends up finding 6 months later that something changed in June 2023 where serviced washers in New York report dramatically more intense vibrations and now we know to go talk to the supplier who gained mix.

It's error logs for non-tech. YMMV on individual team quality if they actually follow-up. Conceptually, Big Data(tm) is something CEOs have been hearing regularly since, what, 2014? So definitely plausible.


I would imagine that they just log everything. Serial number, temperature, which cycle is used, time of day, how long it takes to fill the washer, how long it takes to drain the washer. Everything. Put all data in a great big database. When something needs to be fixed and is covered by the warranty, mark that the failed part is associated with that serial number.

Then do some sort of a regression to discover what logged parameters are associated with what failure modes/broken parts. If washers that take less time to fill up have higher than normal failure rates for some elbow joint, that probably means that high water pressure causes the elbow joint to fail. If a certain elbow joint's failure rate is simply correlated with the number of cycles, that tells you something different. If a certain elbow joint has a high failure rate that's not associated with anything, that probably just means it's a shitty part. But you learn something.

By logging everything and running a regression analysis, when you develop next year's model, you know where to improve. Now when you tell an expensive engineer, "This elbow joint failed on 1000 units of revision F. Make it fail on 100 or fewer units of revision G." you can also give them a starting point to work with.

I'm a software guy. If I get 10 crash dumps, and you don't tell me anything, I don't necessarily know what to work with. If you give me those same 10 crash dumps and tell me that 9 of them had the language set to Arabic or Hebrew I know it's probably a BOM bug. Same thing.

Or you just sell the data to ad companies and let them figure out how to get value from it.
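
A hedged sketch of the kind of regression described above; the CSV and its column names are invented:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    logs = pd.read_csv("cycle_logs.csv")  # hypothetical export of the big database
    features = ["fill_time_s", "avg_spin_rpm", "water_temp_c", "total_cycles"]
    X, y = logs[features], logs["elbow_joint_failed"]  # 1 if replaced under warranty

    model = LogisticRegression(max_iter=1000).fit(X, y)
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name:>15}: {coef:+.3f}")
    # e.g. a strongly negative coefficient on fill_time_s (fast fills -> more failures)
    # would support the high-water-pressure hypothesis; you'd standardize the
    # features first if you wanted to compare coefficients across columns.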


Well if you claim warranty, I'd expect them to want to have that data.

Maybe they also just sell it together with your advertising ID; why not use washing patterns for deanonymization ...


If they're not doing that they should, albeit finding a way to make the additional cost minor.

Collecting all that data for analysis would be incredibly valuable, especially considering the wealth of analysis tools today.


"Even easier if it's in XML or JSON rather than a CSV."

Yes indeed, but compression algorithms are not that new.


Sure, but it's also easy to imagine an engineer just forgetting to or not bothering because it wasn't in the spec.


> Even if you're lazy and uploaded an uncompressed JSON array of objects, that shouldn't be more than a few kB.

See, that’s just one lazy engineer writing one telemetry solution. Multiply that by several engineers across several teams, each cobbling together a different telemetry solution for a different product manager’s initiative using a different stack of JavaScript libraries, throw in some poorly-rolled-out infrastructure changes a few years later resulting in some unanticipated retry loops, and I think you can hit that megabyte per day easily enough.


> A megabyte is a LOT of data.

Well it is, but in this Cloud Native world clueless management and IT engineers have been convinced that a single microservice running on 50 Kubernetes pods and generating 20 MB of trace logs for a single transaction is normal.

Now that we have built this inefficiency industry-wide, nobody is there to wake management up about the huge waste of resources. They are floating in this lurid dream of "ultra smart" machines generating gigabytes of precious intelligence about customer behavior for targeted ads.


It's about 12 bytes per second, or less than 1 kB per minute. Doesn't seem like much.


For telemetry on a washing machine, it is enormous.


That's WiFi/Bluetooth signal strength mapping amounts of data.


It uploads a novel a day. That’s a lot!


It's probably a single probe packet once a minute.


Which is likely what is happening here. The LG "ThinQ" washing machines do allow for remote starts: https://github.com/ollo69/ha-smartthinq-sensors/issues/234

> After 10 minutes (IIRC) in remote start mode without starting the machine goes to sleep. You must use the smartthinq_sensors.wake_up command to wake it up, then the remote_start command to start it.


Just checked mine, which is used all the time, and it's about 1.5 MB per week.


It seems completely inconsequential


It seems completely inexplicable.

I don't care if it's a small percentage of my symmetric gigabit fiber, I only care why they supposedly need it and where it ends up.

A phone number or a timestamp is a tiny amount of data.

In the quaint olden days, you had to go out of your way to volunteer to be a part of some study to have any aspect of your activity recorded every few seconds 24/7 to be collected and analysed like that.

It also doesn't matter that my washing machine usage might not seem like sensitive info. It's wrong by default rather than OK by default. You need a compelling need to justify it, not the other way around. It needs to be necessary to prevent one dead baby every day or something, not the other way around. No one needs to produce any convincing example of harm for that kind of collection to be wrong.

But even so, it's easy to point out how literally any data can end up being sensitive. Washing machine usage could lead to all kinds of assumptions like, this person does not actually live where they say they do (which impacts all kinds of things), or this person is running an illegal business, or housing too many people for the rating of the dwelling or allowed by the lease, etc, or just another data point tracking your movements in general. Or even merely to get out of 10% of warranty claims.


The users did go out of their way to volunteer, by hooking the washing machine up to their network.


They did not. They went out of their way to buy a washing machine and maybe use some monitoring or alerting feature it offers. I decline to believe you do not know this.


War and Peace is 3 MB as uncompressed plaintext[1]. 1 MB a day is a lot.

1: https://gutenberg.org/ebooks/2600


Would you prefer to read War and Peace, or the (shorter) washing machine logs?

It’s touch and go for me. The variable names in the washing machine code would likely be less easily confused.


Ah! I should have read a little further. I apologize!


From my understanding they mean the code was generated by instructing OpenAI’s ChatGPT (contrary to writing the code themselves).



Thank you


Last discussion (4 comments): https://news.ycombinator.com/item?id=37127673

Link to comment from the original thread where you can find an archived version of the repository: https://news.ycombinator.com/item?id=37127061


The poster might be referring to some of the additives used in the making of American bread: https://www.theguardian.com/us-news/2019/may/28/bread-additi...

Now a comparison of US vs. EU cases of early onset colorectal cancer would be required.


US not in the top ten, Eastern Europe overrepresented

https://www.wcrf.org/cancer-trends/colorectal-cancer-statist...


Maybe smoked fish, processed meat sausage and blood sausage (high haem) in Scandinavia and E. Europe.

For bad diagnosis and treatment, Barbados, Samoa and Singapore seem to be outliers.


> consuming fish might decrease the risk of colorectal cancer

Doesn't seem to help Norway, Portugal, Japan, Croatia, or Denmark.

Too much oil, not enough grain?


Complete hare-brained theory: Countries that eat a lot of mussels and oysters.


I would say that in Eastern Europe white bread additives are not as much of a problem as alcohol and red meat.


I don't lend that Guardian article much credence; it uses all the standard journalism/public health tricks to make its argument.

"Potassium bromate, a potent oxidizer that helps bread rise, has been linked to kidney and thyroid cancers in rodents."

if something is "linked to ... cancers in rodents", that tells us basically nothing about its safety in the industrial process and consumption by humans.


> the additives used in the making of American bread

...but far from all US bread. You tend to find the more questionable additives in the cheapest store-bought breads. But it's also not hard to find higher-quality bread that omits them.


It is Meta's new competitor for Twitter

