
If you initiate it, yes. If the seller charges your IBAN via SEPA direct debit, you get 8 weeks to charge it back, no questions asked.


Aren't IBANs entry-only? You can only debit the IBAN.


https://www.onlyoffice.com/saas.aspx - very decent compatibility with Office, good speed; it's a European company; much of the functionality is open source.


They are using Google Analytics and Tag Manager.


This person had the exact same issue with their child, developed an app for it, and got banned from the Play Store: https://habr-com.translate.goog/ru/post/421451/?_x_tr_sl=aut...

The app is available as a standalone APK now: https://channelwhitelist-tilda-ws.translate.goog/?_x_tr_sl=a...

Unfortunately I could not find the sources, but it works rather well and has an English UI as well.


At the very least, LOS supports more phones.


Does it mean you have to put your data into Yandex or Alibaba Cloud if you want to avoid the USG quietly getting it?


Also check out https://github.com/DIYgod/RSSHub, it's aggressively compatible with websites that refuse to do RSS, and has a large community that keeps it up to date


If you have giant migration scripts, for whatever reason, this tool is the only migration engine that would not choke on them.


Really? There are lots of tools out there which just rely on the db to run migrations.


I tried flywaydb, and some others. There are not a lot of migration tools that are not part of a bigger framework.

Can you link some examples that you think would gladly process data migrations that are 1-2 GiB large?


Anything which doesn't try to process the sql file but instead just passes it to the db to deal with, e.g.

    psql -d mydb -f migration.sql
I use a home-built solution that does this (160 lines of Go). It doesn't seem very exotic to me and did not require a lot of work, as there are only a few requirements for a working solution: unique, ordered names for the migration SQL files; use the db to store metadata about which migrations have run; and use the db's own tools to run them.
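The core of such a tool (a hypothetical sketch, not the commenter's actual code) is just sorting the migration files, skipping the ones already recorded as applied, and handing the rest to the database's own client:

```go
package main

import (
	"fmt"
	"sort"
)

// pendingMigrations returns the migration files that have not yet been
// applied, in lexical order (which is chronological, given ordered names
// like 001_init.sql, 002_add_index.sql, ...).
func pendingMigrations(files []string, applied map[string]bool) []string {
	sort.Strings(files)
	var pending []string
	for _, f := range files {
		if !applied[f] {
			pending = append(pending, f)
		}
	}
	return pending
}

func main() {
	files := []string{"002_add_index.sql", "001_init.sql", "003_backfill.sql"}
	applied := map[string]bool{"001_init.sql": true} // normally read from a metadata table
	for _, f := range pendingMigrations(files, applied) {
		// The SQL is never parsed here; it is streamed to psql, so even
		// multi-GiB data migrations pass straight through.
		fmt.Println("psql -d mydb -v ON_ERROR_STOP=1 -f", f)
	}
}
```

In a real version the loop would exec psql (or the equivalent client for your database) and insert a row into the metadata table after each successful file.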


I searched all over the web for a reliable and scalable plain message queue that could work in a full disk-backed mode, not requiring the data to fit in memory.

Was very impressed with how little RAM and CPU it consumed under load.


Does it offer any delivery guarantees?


As far as I read the docs, if you set the memory storage size to 0, it will flush to disk before responding that the message was published OK.

Delivery is "at least once".


Did you check out NATS?


Was using it in production before. NATS itself is great and reliable.

However, disk persistence is done with nats-streaming, and I had big issues with its Raft file-based store: it would stop accepting messages after a short network issue and not get unstuck unless I destroyed the whole nats-streaming data storage and started from scratch.

Also, Raft means 3x disk usage for one queue, since it's fully replicated.


Both NSQ and NATS are very frugal with memory.


The Oculus Quest works with the stock USB cable, even via a passive copper extender to make it longer.

The Oculus desktop software gives you a yellow mark, but games still work.


As far as I can see, RSSHub does exactly that, and supports as many as 536 scraped sources (of varying caliber) at https://github.com/DIYgod/RSSHub/tree/master/lib/routes.

It's not an outlandish amount of work if lots of people chip in with their favorite sources.


I see what you are saying, but I still think it is a far cry from having content providers simply providing the feeds themselves.

In the same way, I don't think that YouTube allowing users to submit closed-caption transcripts, or machine-generating them, is any substitute for the content creator providing them in the first place. I'm sure that in the near future smart TVs will be able to machine-generate closed captions from the audio, but I still don't think we should let television producers off the hook for providing captions.

RSS should be the default. And it is not hard to generate RSS.

I happen to think that big platforms only reluctantly adopted RSS over a decade ago because it was a "standard", and because they felt that it was popular enough to justify the traffic from it. But they do not like RSS. It works against their analytics, their ads, and their control of the presentation.

And while it is cool that people are crowd-sourcing scrapers, I think the real solution is to promote RSS itself and encourage more platforms to simply provide it. Organizations like Mozilla taking Facebook's position that RSS is obsolete have been profoundly unhelpful to the web.

