I found that Liquibase doesn't really support rollbacks, particularly with MySQL: since you can't use transactions for schema updates, a migration that fails partway through just gets left in a half-updated state.
Liquibase does support rollback DDL with MySQL; I used it.
I put each DDL in a Liquibase changeset with a corresponding rollback DDL I constructed by hand. If the Liquibase changeset failed, I could run the rollback for all the steps after the "top" of my wish-I-could-put-them-in-a-MySQL-transaction operations.
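The approach above can be sketched as a Liquibase formatted-SQL changeset (the table, column, and author names here are made up for illustration):

```sql
--liquibase formatted sql

--changeset alice:add-order-notes
ALTER TABLE orders ADD COLUMN notes VARCHAR(255);
--rollback ALTER TABLE orders DROP COLUMN notes;

--changeset alice:add-order-notes-index
CREATE INDEX idx_orders_notes ON orders (notes);
--rollback DROP INDEX idx_orders_notes ON orders;
```

If a later changeset in the run fails, something like `liquibase rollback-count 2` (or rolling back to a tag set before the run) replays the hand-written `--rollback` statements in reverse order, undoing the changesets that did apply.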
But you are right that MySQL itself doesn't support transactions for DDL, and that is true whatever tool you use.
It is true that if you put multiple non-transactional SQL operations in a single Liquibase changeset, you can't reliably roll back as described above.
It is also true that hand-writing an inverse rollback for each changeset's SQL takes time and effort, particularly to test it sufficiently, and whether that coding and testing is worth the business/technical value depends on your situation and use case.
I tried using Wanderlog on a recent 4-month trip and it became totally unusable after I'd added maybe two months' worth of things to it. It has some really baffling UI/UX decisions too, like showing only the name of an accommodation, not its address. It really made me feel like no one working on it had ever tried to use it while actually traveling.
They had also gone big on AI slop for recommendations which made it really hard to trust any advice in the lists of things to do.
I think it's really hard for these sorts of travel apps to find an audience. Very, very few people travel often enough to justify a subscription, and it's hard to justify a significant one-off purchase when traveling is already so expensive.
I'm surprised that more comments aren't mentioning this. I recently moved from a stack managed by AWS CDK to one managed by Pulumi on GCP, and the difference is stark. The building blocks that AWS provides let you spin up complex services with relatively minimal configuration. The Pulumi code required to create a simple GCP Cloud Run service with IAP behind a load balancer is literally hundreds of lines.
Plus GCP's documentation is far, far worse than AWS's.
This is great; I don't have a concrete use case right now but can definitely see myself returning to this in the future. One thing that would be handy is a way to either accept an ffmpeg CLI command, or convert a CLI command into the TypeScript syntax. In my experience with ffmpeg you often do a lot of copying and pasting of commands from documentation or random guides, so it'd save a bit of time if you didn't need to transcribe those commands into StreamPot's syntax.
ChatGPT has been a game changer for my ffmpeg usage. Instead of cobbling together commands from Stack Overflow questions and guides, I just describe what I want in plain English, and it has a pretty damn good success rate at giving me what I need.
FWIW I found the whole video quite interesting, I had never really considered that there could be sound recordings from before anyone had thought of a way to play them back.
Though I do remember an old mythbusters episode [1] where they tested whether it was possible for audio to be "accidentally" recorded on a pot when a piece of grass happened to mark the pot while spinning.
I don't think this was a real myth. It comes from an X-Files episode in which a clay pot, molded while Jesus was ordering Lazarus to rise from the dead, could be used to bring other people back from the dead by playing back the recording. If I'm remembering correctly, even within the X-Files episode it turned out to be a hoax.
That X-Files episode may have been inspired by "Time Shards" [1] by Gregory Benford, a short story first published in 1979.
TLDR: Too late to be included in the bi-millenium vault, a Smithsonian researcher discovers an audio recording accidentally inscribed on a c. 1280 pot by a pointy tool cutting a decorative spiral. After listening to the banal conversation recorded on the pot, the researcher wonders about the contents of the vault to be opened in a thousand years: “What makes you think we’ve done any better?”
As so often, Daedalus (David E H Jones) got there first with one of his semi-humorous articles in New Scientist in 1969 - one of those collected in "The Inventions of Daedalus" in 1982.
This is great, I've been meaning to build something similar for some time. I tried to run the queries from the tutorial but hit lots of CORS errors loading the datasets; have you found any way to work around those?
Yes, unfortunately if the "foreign" sources don't support CORS, you'd have to use a CORS proxy... If you want to self-host, there's one at https://github.com/Zibri/cloudflare-cors-anywhere that can be deployed to CloudFlare Workers (the code is a bit messy though).
GitHub supports CORS for raw data for example, that's why I put it in the sample queries.
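For anyone hitting those errors, the fallback pattern is roughly this (a minimal sketch; the worker hostname is a placeholder for your own deployment, and the "?&lt;target-url&gt;" calling convention follows the cors-anywhere style of the linked project):

```javascript
// Sketch of working around a dataset host that doesn't send CORS headers,
// assuming a cors-anywhere-style proxy deployed at a placeholder hostname.
const CORS_PROXY = "https://my-proxy.example.workers.dev/?";

function withCorsProxy(url) {
  // cors-anywhere-style proxies take the target URL after "?", fetch it
  // server-side, and re-serve the response with permissive CORS headers.
  return CORS_PROXY + url;
}

// Try the direct fetch first (works for CORS-friendly hosts like raw
// GitHub content) and only fall back to the proxy when it's blocked.
async function fetchDataset(url) {
  try {
    return await fetch(url);
  } catch {
    return await fetch(withCorsProxy(url));
  }
}
```

In a browser, a blocked cross-origin request surfaces as a rejected fetch, so the try/catch routes only the problematic hosts through the proxy.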
Have you tried getting in touch with GCP to see if they would refund the charge? I've heard plenty of stories of cloud services refunding large one-off accidental spends like this one.
"Last week I ran a script on BigQuery for historical HTTP Archive data and was billed $14,000 by Google Cloud with zero warning whatsoever, *and they won’t remove the fee.*"
As I was reading this I was thinking to myself, "I wonder if it's Grammarly related," because I experienced a bug some time ago that presented itself in a similar way. It was impossible to reproduce but affected lots of people internally within certain departments. Eventually we figured out that the thing they had in common was having the Grammarly extension installed.
The other key thing was that the bug only appeared on our staging preview URLs, not on the live website. It turned out it was because of a bad regex in the grammarly extension that caused the page to hang if the domain name was more than about 100 characters. Our staging domains were pretty long; I think they contained a few subdomains and had a job number or something in there.
This one is even crazier, though, if it's really caused by the desktop app - that's pretty scary!
I was so disappointed that the story ended with "we can't look inside Grammarly or Chrome to know why the GIF decode/rendering causes it to crash." That isn't interesting at all. Many problems get narrowed down to some combination of factors, but never learning the real why is unsatisfying.
> It turned out it was because of a bad regex in the grammarly extension that caused the page to hang if the domain name was more than about 100 characters.
Just today I debugged a regex that would DoS our backend whenever the user enters the wrong thing in a form.
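For anyone curious what that failure mode looks like, here's a classic catastrophic-backtracking illustration (not the actual regex from either story):

```javascript
// Nested quantifiers like (a+)+ backtrack exponentially when the input
// *almost* matches: the engine tries every way of splitting the a's
// between the inner and outer "+".
const risky = /^(a+)+$/;

// Matches instantly when the input really is all a's:
console.log(risky.test("aaaaaaaaaa")); // true

// A single trailing "b" forces the full backtracking search. Each extra
// "a" roughly doubles the work, so a ~100-character near-miss (like a
// very long domain name) can hang a page or process indefinitely.
const n = 20; // kept small on purpose; already hundreds of thousands of paths
const almostMatch = "a".repeat(n) + "b";
console.log(risky.test(almostMatch)); // false, but measurably slower as n grows
```

The fix is usually to remove the nesting (here, `/^a+$/` matches the same strings with no blowup) or to bound the input length before matching.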
That was a great read, but there was one thing I didn't understand: why would the regex string have "." twice in a row? What does ".." match that "." doesn't? Does it just mean "at least two characters"?
Holy shit. I had a similar thing happen with some web based video surveillance software maybe 5 years ago.
A manager of some sort had his aging laptop replaced as part of a company-wide Windows 10 upgrade project. Super friendly older guy, probably in sales. IT went through all the procedures of inventorying software and network needs, backing up the user profile and docs, etc. Great processes in place. I remember this because I saw the device assessment: it was something like a 10-year-old ThinkPad with 4 GB of RAM and a note saying he had to keep it plugged in at all times or it would shut down. Who puts up with that? Patience of a saint. Anyway.
Laptop was deployed by onsite IT to verify everything was gravy. All checked out except for Grammarly. License didn't get transferred properly or something so they had to put in a request to get his licensing working.
Fast forward a week and he gets his license key and Grammarly is tested good to go. He's checked off the list.
Later that day we get a call about not being able to see security cameras because the web page is crashing. Helpdesk tries the basics, reboot, clear cache, reinstall browser, rebuild profile, etc., nothing works and it gets escalated to me. I check the network, firewall logs, log into another PC, onsite, off-site, etc. All working for me, no one else having issues.
I tell him "I'm completely baffled here, have there been any changes lately? In your office? With your PC?" He jokingly says "Well yeah they installed Grammarly today maybe that's it?" We both laugh and I say well, I'm literally out of ideas, fuck it let's try it.
I remote in and uninstall Grammarly. "Ok go ahead and try the cameras lol". I then watch him open up Outlook, go to a folder named "Cameras", and open an email with a link to his cameras "home page". It fuckin worked. I turned Grammarly back on and clicked the link and sure enough it failed.
I made him a browser shortcut, moved his "email shortcuts" into his browser, blew his mind, and closed the ticket, but it definitely bugged me.
This tracks because it was some very dated camera software (you'll know what I mean if you've seen it) and the link was to his customized homepage with a super long php (or something) generated url. He was the only one at the site with Grammarly as well so it was the only time we saw the problem.
Thank you, I can finally close this cold case out in my brain.
If a website bug is not easily solved, first order of troubleshooting is to disable all extensions. Devs don't often think an extension could be causing the problem, but extensions can do wild things to a webpage. I've caught a few bugs caused by extensions this way.
"Hunch #2" in this article is about extensions causing problems. That is what my comment is in reference to, but maybe you didn't read the article.
Sorry, but there is no rule that every comment made about an article must be specific to the article's outcome. I can comment on other things covered in the article too.