Hacker News | new | past | comments | ask | show | jobs | submit | gvkhna's comments | login

Unfortunately, most of the CPUC previously worked at PG&E; the people who understand energy regulation are usually energy folks. So the CPUC is typically quite sympathetic to PG&E's pleas: it has approved every single rate hike PG&E has proposed, five times last year alone.


Is anyone running TrueNAS SCALE for this kind of purpose? I haven't used it, but its architecture around k8s seems extremely promising. For most use cases a simple Docker container is all you need, but sometimes running other apps like Grafana from a k8s manifest is easier to manage on one VPS and gives you the flexibility of a cluster. Just curious.


Yes, that works for most use cases, but there are cases where you may need to store or shuttle the time zone. For instance, you want to know that this UTC timestamp was originally created in PDT. You would have to store two variables. Most other languages have this functionality; it can be useful and is good to have, though probably only needed by Jedis.
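A minimal Python sketch of the two-variable approach (hypothetical values), using the stdlib zoneinfo module: keep the instant in UTC plus the IANA zone name it came from.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical record: store the instant in UTC *and* the zone it came from.
created_utc = datetime(2024, 3, 9, 18, 0, tzinfo=timezone.utc)
created_zone = "America/Los_Angeles"

# Later, recover the original wall-clock reading in that zone:
original_local = created_utc.astimezone(ZoneInfo(created_zone))
print(original_local.isoformat())  # 2024-03-09T10:00:00-08:00
```

The UTC value alone orders events correctly; the zone name is the extra variable that lets you reconstruct what the clock on the wall said.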


The article gives an example of buying a coffee with your credit card while travelling in Sydney, then returning to Madrid and, a few months later, seeing a charge for 3:50 AM on that date...

Google Photos also gets confused by "When was this picture taken?": my older camera just stores the EXIF date in "local time", and I have to remember to change its timezone when travelling. If GPhotos can't figure it out, it might show pictures out of the airplane window, and then the next series of pictures are from the departure airport, because those are from a "later" hour (since the timezone info is missing).

I suppose I could keep it at my home time or UTC...


And if your spouse is checking the charges from Madrid, they probably want to see it in Madrid time. There is no single correct answer.


Exif (fun exercise: find out who specifies it and when it was last updated!) actually didn't even have a way of storing timestamps in UTC or with zone information until fairly recently: It was all local times without zone information.

I've seen some software work around that by combining the local date/time with an embedded GPS tag, if present, which does contain the time in UTC.
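A sketch of that workaround in Python (hypothetical EXIF values): the naive local timestamp minus the GPS UTC timestamp approximates the offset, rounded to 15 minutes since real-world zone offsets are multiples of that.

```python
from datetime import datetime, timedelta

# Hypothetical EXIF DateTimeOriginal: naive local wall-clock time
exif_local = datetime(2023, 6, 1, 14, 37, 12)
# The EXIF GPS tags carry (roughly) the same instant in UTC
gps_utc = datetime(2023, 6, 1, 4, 37, 10)

# Their difference approximates the UTC offset; round to the nearest
# 15 minutes, since real time zone offsets are multiples of that.
raw = exif_local - gps_utc
offset = timedelta(minutes=round(raw / timedelta(minutes=15)) * 15)
print(offset)  # 10:00:00, i.e. UTC+10 (e.g. Sydney in June)
```

The rounding also absorbs the few seconds of skew between the camera clock and the GPS clock.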


Ironically, time zones are a hack with bad precision about the position of the sun in the sky. The GPS coordinate sidesteps the nonsense (it can be converted to the time zone as defined at that place at that moment).


That is how I would expect a bank statement to read, though. I would find it infinitely more confusing if I bought something online and my bank showed the time of wherever the seller was located.

The photos problem is harder, but the app just needs to convert from local time to UTC when you import. There's not much that can be done if you take photos on a camera set to a different time zone than the one you're in, without more metadata.


You'll find that most bank systems avoid any notion of time precision higher than calendar days for a variety of reasons :) As a side effect, this practice conveniently avoids that problem entirely.

> That is how I would expect a bank statement to read though. I would find it infinitely more confusing if I bought something online in my bank showed the time of wherever the seller was located.

When using my banking app abroad (one that does show timestamps), I'm usually much more confused by their presence than by their absence.

> The photos problem is harder, but the app needs to just convert it from local time to UTC when you import it.

But I usually want to see the time in local hours and minutes for photos! Sunsets, new year's fireworks etc. happen according to local time, not UTC or my current timezone's offset.


Yeah, storage needs to be implemented in UTC and display needs to be in local time.


But which local time?

Sometimes, the local time at the place the photo was taken can make more sense, but it's not a general rule.


Yeah, these are workarounds we have to use because many pieces of software weren't implemented fully...


I have always wondered why breaking the timestamp into two separate data points, the instant in UTC and the time zone, and storing both is not the accepted solution. It seems like the cleanest solution to me.


You can't accurately convert future local times to UTC timestamps, because that conversion changes when timezone rules change.

Let's say we schedule a meeting for next year at 15:00 local time in SF. You store that as a UTC timestamp of 2025-08-24 22:00 and the America/Los_Angeles timezone. Now, imagine California decides to abolish daylight saving time and stay on UTC-8 next year. Our meeting time, agreed upon in local time, is now actually at 23:00 UTC.
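A sketch of the safer pattern in Python, using the example's date: store the agreed wall-clock time plus the zone name, and resolve it to UTC only when needed, with whatever tz database rules are current at that moment.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store the agreed *wall-clock* time plus the zone name, not a UTC instant.
meeting_local = datetime(2025, 8, 24, 15, 0)
zone = "America/Los_Angeles"

# Resolve to UTC only when needed, under whatever tz rules apply then.
resolved = meeting_local.replace(tzinfo=ZoneInfo(zone)).astimezone(timezone.utc)
print(resolved.isoformat())  # 2025-08-24T22:00:00+00:00 under today's PDT rules
```

If the zone's rules change later, re-resolving produces a different UTC instant while the local 15:00 meeting time stays what was agreed.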


Wow, thanks for sharing this; that certainly is a use case not covered by the system I proposed. I imagine it would require versioning the timezone so we could translate our America/Los_Angeles v1.0 timestamp to America/Los_Angeles v2.0.


Two different implementations might produce two different local times from that, e.g. due to not being aware of changed DST/timezone policies. Hence the recommendation to separate user/clock/calendar time (which must be exact to a human) from absolute/relative timestamps (which must be exact to a computer).


From my experience, it certainly is. It's easy to keep events in sequence as well as recover local time. When daylight saving hits you can still calculate correctly, and you can quickly get real times for hours worked or drive time for freight across time zones to follow the hours-of-service rules.


You can make the same meme about this use case, too. Once you get to the right, you realise you want two variables for this.


Intel also worked on modems for a long time, ultimately abandoning the effort and selling it to Apple, which has also not been able to bear fruit with it yet. Modems are hard, but Intel, with their experience, could have stuck it out and had a competitive modem chip; instead they focused on short-term profits, which have evaporated now. Note I own shares in both companies.


AFAIK Intel went into the modem business with the Infineon wireless acquisition in 2011; the unit was doing fine but was sold due to the global financial crisis. Intel, as it always does with acquisitions, turned this acquired business into a money pit. They forced all the designs to be ported to Intel fab processes, but Intel fabs didn't care about any design business, especially a side design (unlike TSMC). Apple leveraged Intel wireless as a bargaining chip against Broadcom and Qualcomm. In the end, Apple acqui-hired the people from this unit. I see that Apple is much more serious than Intel about getting it done.


Thank you for your contribution. This is a great piece of FOSS!


This looks great!

I’ve done something similar with tippecanoe and mapshaper from GIS files. That allowed me to use mapbox.js with my own hosted custom maps as flat files. Very fast, but it still needed a running server (tileserver-gl-light). This could negate that, very cool!


It does. Recently migrated a project from a tile server to PMTiles, and now tile generation is just part of CI in the kart repo where the shapefiles are edited. CI passes and uploads the pmtiles file to the web server. Completely eliminated the tile server altogether.


Functionally yes, but VRF (variable refrigerant flow), which leads to more efficiency gains, is a bit more than just an inverter valve added on top. Otherwise the heat pumps would be cycling on and off. That’s also why ACs are typically oversized here in the US: they turn on one massive load for a little while, then turn it off (cycling). With VRF they run steadily at a variable output over longer periods of time.


The chiplet design is another innovation, like FinFET and even EUV, that allows for more transistors closer together. The interconnect on Blackwell, for instance, means no timing issues between the dies.

Transistor sizes are already closing in on 1nm; there will be innovation there eventually, but any step can be a stepping stone. I don’t see the issue here.


Transistors are nowhere close to 1nm right now. Current "3nm" transistors are something like 20-40nm wide, depending on how you measure.


This looks fantastic! Does it support NODE_EXTRA_CA_CERTS etc.? I’ve had that issue with mkcert in the past; it’s easy to fix, but it's another thing to keep track of in these already complex dev setups if you’re doing local HTTPS.


I don't understand what you mean; what is "NODE_EXTRA_CA_CERTS etc." and what is the issue with mkcert?

Localias wraps Caddy to handle all the cert provisioning; I believe Caddy uses mkcert. I haven't seen any bug reports about it yet, but if you give it a try and run into an issue I would be happy to help fix it.


To be fair, protobufs are not purely focused on speed. And which speed exactly? Speed to parse, read, or transfer?

Flatbuffers are purely about fast reads/writes, storing buffers that can be mmap’d right into memory, at the cost of some size bloat (slower to transfer over the wire).

Protobufs balance speed against size: the reduced size over the wire slows parsing but speeds up transfer.

I think capnproto is a fantastic project, but considering Google is built on top of gRPC/protobufs, those will be around effectively forever, which gives them a lot of reliability in my book.
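As an illustration of that size-vs-parse-speed tradeoff (a sketch, not Google's implementation), protobuf encodes integers as base-128 varints: small values take fewer bytes on the wire, at the cost of per-byte decoding work.

```python
def encode_varint(n: int) -> bytes:
    """Protobuf-style base-128 varint: 7 payload bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

print(encode_varint(300).hex())  # ac02 -> 2 bytes instead of a fixed 4 or 8
```

A fixed-width format like flatbuffers skips that per-byte work entirely, which is exactly the bloat-for-speed trade described above.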


> capnproto is a fantastic project, but considering google is built on top of grpc/protobufs, they will be around for effectively ever

When choosing an RPC interface, we went with gRPC because of maturity and familiarity, not lifetime support specifically.

Building RPC on Cap'n Proto is a bit more barebones: https://capnproto.org/rpc.html

But the library support is getting there (converts a description into code at build): https://crates.io/crates/capnp-rpc

I look forward to trying out Cap'n Proto for RPC.

But I also look forward to trying Selium as a message system: https://selium.com/

Maybe in combination.

