arshxyz's comments

They fixed the issue but for a beautiful moment in time, it looked like this: https://archive.is/LPC5O


Looks like my old MySpace


"Nah the build quality is great but I just can't handle a functioning notification system that doesn't come with spam"


That is clearly an altered image. The padding is inconsistent and the border radius on the highlighted text is wrong.


Half of these are fake/satire. Look at the first one, where the highlighted text colour should be white, not tinted red.

The cp3 one has inconsistent padding. The astronaut one is clearly an inspect-element edit. I could go on.


It is compatible with Ollama: Advanced settings -> "URL for AI API".
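If it helps anyone, a quick sanity check that the server answers before you paste the URL into that setting. This is a minimal sketch assuming a default local Ollama install on its standard port; "llama3" is just a placeholder for whatever model you've pulled:

    # Check that a local Ollama server responds before pointing the app's
    # "URL for AI API" setting at it. Assumes Ollama's default port (11434);
    # "llama3" is a placeholder for any model you've pulled locally.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Say hello.",
            "stream": False,  # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])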


> HTML to awkward so you have to switch to webgl which is too low level or you might switch to svg

HTML is replaceable by WebGL which is replaceable by SVG?

> JavaScript that compiles down into typescript

JS compiles down to TS?

I am not sure if anything in this comment indicates you have any experience with developing for the web, but that is not my primary issue with this comment. My gripe is claiming that things are "awkward" or can't be done without even looking around. None of the examples below use WebGL.

Here's a WinXP UI clone that runs in your browser with floating, resizable windows. - https://winxp.vercel.app/

Here's one that mocks macOS - https://macos-web.app/


>JS compiles down to TS?

It's called a typo.

>I am not sure if anything in this comment indicates you have any experience with developing for the web, but that is not my primary issue with this comment. My gripe is claiming that things are "awkward" or can't be done without even looking around. None of the examples below use WebGL.

I think you're smart enough to recognize it's a typo. The primary issue with your comment is highlighting issues as if they weren't typos but actual lapses in experience. It's like saying someone misspelled a word and claiming they therefore have no experience with the English language. If someone did that, the intent would be 100% malice, not a misunderstanding. Don't be malicious. Don't debate and use malice to prove a point. What you should do is make a point and change your stance based on evidence.

Also, I never said it can't be done. I said it's awkward. It certainly can be done.

WebGL is low level. Same with WebGPU: it means you have to use shaders. Godot has a UI library, which makes it less awkward. I'm saying it's JUST as awkward to build these things with WebGL as with other web front-end technologies.

>Here's a WinXP UI clone that runs in your browser with floating, resizable windows. - https://winxp.vercel.app/

>Here's one that mocks macOS - https://macos-web.app/

What do these examples prove? That it can be done? Did I say it can't be done? Again, I didn't. It's obvious these examples exist; everyone's seen them. Heck, you can probably do the whole thing with CSS and no JS. The gist of my comment is that it's awkward to do, not that it can't be done. You can construct a house out of toothpicks and glue. I'm sure it CAN be done. But bricks are the less awkward tool.


I'm another person who doesn't like Electron/web dev, and while reading your comment I thought, "does this guy even know what he's talking about?"

So, as someone aligned closer to your side (preferring other toolkits to Electron), I don't think the other user who replied to you earlier was being malicious in their interpretation.


So did you think I actually meant that JavaScript compiles down into TypeScript?

I'm sorry, but that comment alone makes me suspect malice. It's too obvious. To each their own.


Yes, I did, and I was confused (hence my thought that the comment came from someone focused on things other than web dev).

It's not that I'm attributing ill intent now that you've clarified what you meant (I mentioned having the same anti-Electron inclination as you to demonstrate that), but I probably can't change your mind if you think two people's misinterpretation of the same comment came from malice.


Such typos are common and obvious.

It's more likely that someone uses such errors as ammunition. That tactic is quite common, and it's obvious to me that this is the more likely scenario here with the other replier.

Another common thing is to team up with people who have similar intentions. When that happens, they synchronize their biases. Two coworkers with the same intent will synchronize their biases in a meeting in order to push that intent... Happens all the time.

You like the front end; my guess about what's going on with you is a team-up of sorts. Malice is too strong a word for what you're doing. You're teaming up and adjusting some obvious biases to favor your intent and create alignment with your teammate. This is all done subconsciously, too. But that's just my guess about what you're doing.

These are all obvious human behaviors that are quite common, but people don't think about them until they're explicitly pointed out.

But in the end, who cares. We're both now clear about what's going on. No point in debating this further.


Google did make something like this: https://www.google.com/advanced_search

It has a lot of options; it even allows you to search ONLY the text/title/URL, and you can filter by usage rights (free for personal/commercial use, etc.).
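For reference, most of those form fields just map onto plain query operators you can type directly (the usage-rights filter is the main exception; as far as I know it only exists as a URL parameter):

    allintitle:rust async      # match words in the page title only
    allinurl:changelog         # match words in the URL only
    allintext:"exact phrase"   # match words in the body text only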

Interestingly, it was last updated before Material Design was a thing. Maybe it's nostalgia, but I love the design language Google had before Material injected padding everywhere.


That's a good point about padding. In trying to make their UIs "cleaner", they've made content impossible to fit unless you're maximized on a 27" screen (which is presumably how they develop).


This is likely very paranoid behaviour, but I recommend downloading all your Google data from time to time: takeout.google.com


This is absolutely not paranoid and everyone should keep local copies of any data that is important to them in any way.


If you aren't backing up and restoring your data, you aren't storing your data.


This works fine for smaller accounts, but on larger accounts it seems to fail regularly (based on my own experience and on other reports I found online).

Exporting my Google Photos fails fairly consistently, even with lots of attempts: out of well over 10 export attempts this year, maybe a single one succeeded. I have a few hundred GB of data stored on that account. I also have a support ticket open with Google on the issue, but after some initial follow-ups I haven't received a response in a couple of months now.

That said, my current approach for backing things up is to upload an "age"-encrypted version of the Google Takeout data to Wasabi. Once it's uploaded, I run a script that shows me the diff between the data sets (so I can ensure no old data went missing that shouldn't have) before I delete the older data. Probably not the most optimal approach, though; it might be better to set up some versioning layer on top of Wasabi and keep deleted or modified data forever.
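A rough sketch of the encrypt-and-upload step, for anyone curious. The age recipient, file names, and bucket path are all placeholders, and it relies on age and s3cmd both supporting stdin/stdout streaming:

    # Stream a Takeout archive through age (public-key encryption) and
    # straight into s3cmd for the Wasabi upload, without a temp file.
    # Recipient key, file name, and bucket path are placeholders.
    import subprocess

    with open("takeout-001.tgz", "rb") as archive:
        age = subprocess.Popen(
            ["age", "-r", "age1example..."],  # encrypt to your age public key
            stdin=archive, stdout=subprocess.PIPE)
        subprocess.run(
            ["s3cmd", "put", "-", "s3://my-takeout-backups/takeout-001.tgz.age"],
            stdin=age.stdout, check=True)
        age.stdout.close()
        age.wait()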


I was trying to set up something similar. I already do the Google Takeout exports every few months (~400GB; I don't have issues with the export, though), but so far I have been storing all of them.

How do you diff the old and new encrypted versions? Do you encrypt and upload the Takeout .tar.gz files directly, or do you extract first and then encrypt?


My personal Internet connection is a bit too slow for re-uploading all the data, and my vserver doesn't have enough disk space to store it temporarily, so I pretty much do everything in a streaming fashion: I use a Firefox extension that gives me the wget command (which includes cookies, etc.) when triggering a local download from Google Takeout, then I patch that command to stream to stdout. This first (tee-)pipes into a Python script that decompresses the data on the fly and dumps the hash of each file into a log; the stream also goes to "age" for encryption and then to s3cmd for uploading the encrypted data to Wasabi.
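A minimal sketch of that hash-logging step (file names are placeholders; it assumes the Takeout archive is a .tgz arriving on stdin via the tee):

    # Read a gzipped tar stream from stdin and log a SHA-256 hash per file,
    # without ever writing the archive to disk (tarfile's "r|gz" mode reads
    # the stream sequentially, so members must be processed in order).
    import hashlib
    import sys
    import tarfile

    with tarfile.open(fileobj=sys.stdin.buffer, mode="r|gz") as archive, \
            open("takeout-hashes.log", "w") as log:
        for member in archive:
            if not member.isfile():
                continue
            digest = hashlib.sha256()
            f = archive.extractfile(member)
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
            log.write(f"{digest.hexdigest()}  {member.name}\n")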

For the comparison I pretty much only use the logged hashes, which let me figure out whether any hashes (and their associated files) are missing from the new version of the backup. This isn't a perfect solution yet, as a few things aren't detected: for example, Google Takeout bundles mails into mbox files, and I currently don't check for missing individual mails. It would be better to convert the mbox files to a Maildir first so that the comparison can be done on a per-mail basis.
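A minimal sketch of that comparison, assuming two logs in the "<hash>  <path>" format above: it flags every hash from the old backup that no longer appears in the new one.

    # Compare two hash logs and report files whose hashes are present in
    # the old backup but missing from the new one (log paths come from argv).
    import sys

    def load(path):
        # map hash -> file path (first occurrence wins)
        entries = {}
        with open(path) as f:
            for line in f:
                digest, _, name = line.rstrip("\n").partition("  ")
                entries.setdefault(digest, name)
        return entries

    old, new = load(sys.argv[1]), load(sys.argv[2])
    for digest in sorted(set(old) - set(new)):
        print(f"missing from new backup: {old[digest]} ({digest[:12]}...)")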


100%. I do this yearly at tax time, when I'm adding a bunch of new docs.



My colleague just asked if there is a way to query ChatGPT using DNS :) :)
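In case they're serious: it's doable in principle by encoding the question in the query name and reading the answer back from TXT records. A hypothetical sketch with dnspython; the gateway address is made up, and you'd need a DNS server that actually forwards queries to an LLM:

    # Hypothetical DNS-to-LLM query: the question goes in the QNAME, the
    # answer comes back as TXT records. Requires the dnspython package;
    # the nameserver address below is a made-up placeholder.
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["203.0.113.1"]  # hypothetical LLM-over-DNS gateway
    answer = resolver.resolve("what-is-the-capital-of-france", "TXT")
    for record in answer:
        print(b"".join(record.strings).decode())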


Oh, thanks! I was just about to ask for this link.


The devs behind tldraw also played with this to create something cool - https://twitter.com/steveruizok/status/1727625036159234555

