Hugo worked for me. As part of the GitHub pipeline that builds and deploys the site I can grab some ‘dynamic’ content (from a Notion DB) and render it. I subsequently added Zapier so that when the Notion DB changes it triggers the pipeline to update my website. The only thing I pay for is the web hosting with DreamHost.
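The ‘grab dynamic content’ step is just a small script that runs before `hugo` in the workflow. Roughly something like the Python sketch below, where the environment variables, the ‘Name’ title property and the output path are placeholders rather than my exact setup:

```python
# Hypothetical sketch of the Notion step in the build: query a Notion database
# and write the rows out as a Hugo data file that templates can range over.
import json
import os

import requests

NOTION_TOKEN = os.environ["NOTION_TOKEN"]        # integration token, stored as a CI secret
DATABASE_ID = os.environ["NOTION_DATABASE_ID"]   # the Notion DB to pull from

resp = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers={
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
    },
    json={},  # no filter: just return the first page of results
    timeout=30,
)
resp.raise_for_status()

# Flatten the pages into something Hugo-friendly; the property names are guesses.
rows = [
    {
        "title": page["properties"]["Name"]["title"][0]["plain_text"],
        "url": page["url"],
    }
    for page in resp.json()["results"]
    if page["properties"]["Name"]["title"]
]

# Hugo picks up anything under data/ automatically (site.Data.notion in templates).
with open("data/notion.json", "w") as fh:
    json.dump(rows, fh, indent=2)
```

Zapier then only needs a webhook step that calls the GitHub API to kick off that same workflow whenever the database changes.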
On my personal site (also built with Hugo) I post images of food I have eaten and media I have consumed. I could use Instagram, Bluesky or X but I want the content to be mine and stay mine. And I am doing it because I like to blog things not because I want the interaction on social media.
Where did you settle on for your image hosting? I like GH Pages and Hugo, but it feels dirty to put images in a git repository. At the same time, GitHub does allow 5GB per repo, so I'm torn.
Yeah, the whole point of this is tracking the progress of something, not vanity interaction. This is why I decided to leave it out of the main RSS feed to avoid spamming readers.
It seems that even with the new scanners there is some issue that means items have to be temporarily limited to no more than 100ml. Maybe they are waiting for a software update… though if there is a loophole it seems weird to wait until September 1st…
> Technical information received by the Commission and validated by ECAC States and laboratories, shows that the existing configurations of standard C3 EDSCB equipment to which the Commission has granted the ‘EU Stamp’ marking or the ‘EU Stamp pending’ marking need to be revised in order to improve their performance
Sounds like this might be a software problem of some sort, although it’s surprising if that’s affecting multiple vendors. Maybe an issue with the certification process?
Dankeschön! The one I am having some success with at the moment is https://www.herrprofessor.com/en/podcast/ as I can listen to the podcasts on the way to work and he explains things in a way my software engineering brain can follow :) Also, I didn’t see https://www.vhs-lernportal.de/ on your site at first glance; it’s an outstanding resource for free classes that can get you up to B1 level pretty effectively, in my experience.
Previous HN comments indicated this could just be demo Snowflake accounts, which were all compromised via a single individual’s account at Snowflake. But the announcements don’t seem consistent with this. Do we think prospective customers really shared hundreds of millions of real customer records into demo accounts? Or, more likely, was the salesperson granted access to production systems by the prospective clients, so their credentials, without MFA, could be used to access many customers’ real data? I struggle to see how Snowflake can blame the customer here; secure by default is something a customer should reasonably expect for their money.
I think if it’s one customer you could maybe blame the customer and get away with it. If it’s multiple at once, all those compromised customers very obviously point the finger back at you.
My guess is that it went down like this.
Ticketmaster gave access to their production tenant to a sales engineer who was probably attached to their account rep. He got an account with a set password, wasn't onboarded into their Okta/Azure AD/etc, and had neither MFA enabled on his account nor access restricted to a range of IPs.
He got p0wned and the hackers got in using his creds. Of course he likely had accountadmin or something highly privileged, since he was routinely asked to look at random things at Ticketmaster... that didn't help either.
From reading the PDF it seems that this ‘merely’ generates tests that will repeatedly pass, i.e. tests that are not flaky. The main purpose is to create a regression test suite by having tests that pin the behaviour of existing code. This isn’t a replacement for developer-written tests, which one would hope come with knowledge of what the functional requirement actually is.
Almost 20 years ago the company I worked for trialled AgitarOne - its promise was automagically generating test cases for Java code that helped explore its behaviour. Agitar could also create passing tests more or less automatically, which you could then use as a regression suite. Personally I never liked it, as it just produced too much low-value test code, and it was something management didn’t really understand - to them, if the test coverage had gone up then the quality must have too. I wonder how much better the LLM approach FB talk about here is compared to that, though…
A lot of unit tests generated that way will simply be change detectors (fail when code changes) rather than regression tests (fail when a bug is re-introduced). Those are pretty big distinctions. I don’t see LLMs getting there until they can ascertain test correctness without just assuming good tests pass or depending on an oracle (the prompt will have to include behaviour expectations somehow).
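To make the change detector vs regression test distinction concrete, a toy sketch (the function and tests are made up, not from the paper): the first test pins whatever the current implementation happens to return, so a harmless refactor breaks it; the second asserts the stated requirement and only fails when that behaviour actually regresses.

```python
# Toy illustration, not from the paper: tags() returns a user's cleaned-up tags.

def tags(user_record: dict) -> list[str]:
    # Implementation detail: happens to preserve insertion order.
    return [t.strip() for t in user_record.get("tags", [])]

# Change detector: pins the exact output, including the incidental ordering.
# Switching the implementation to return a sorted or de-duplicated list breaks
# this test even though no stated requirement changed.
def test_pins_current_output():
    assert tags({"tags": [" b", "a "]}) == ["b", "a"]

# Regression test: asserts the requirement (whitespace stripped, nothing lost),
# so it only fails when that behaviour actually regresses.
def test_tags_are_stripped_and_complete():
    assert sorted(tags({"tags": [" b", "a "]})) == ["a", "b"]
```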
This articulates the problem I’m having right now in an interesting way. I’m fine writing unit tests that validate business logic requirements or bug fixes, but writing tests that validate implementations to the point that they reimplement the same logic is a bit much.
I want to figure out how to count the number of times a test has had to change with updated requirements vs how many defects they’ve prevented (vs how much wall clock time / compute resources they’ve consumed in running them).
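The churn half of that is at least approximable from version control. A rough sketch, assuming tests live under tests/ in a git repo; tying changes to defects actually prevented would still need issue-tracker or CI data:

```python
# Rough sketch: count how often each test file has changed, as a proxy for
# "tests that had to change along with updated requirements".
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(
    path
    for path in log.splitlines()
    if path.startswith("tests/") and path.endswith(".py")
)

for path, changes in churn.most_common(10):
    print(f"{changes:4d}  {path}")
```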
Brilliant distillation of this insight, I've never heard it put in those words before but it's perfect. It cuts both ways too: if you have lots of tests but most of them aren't really exercising the external API, then you're worse off.
> I want to figure out how to count the number of times a test has had to change with updated requirements vs how many defects they’ve prevented
I did the same some years back in a project that had both a unit test suite with pretty high code coverage and an end-to-end suite as well.
The results for the unit test suite were abysmal. The number of times they caught an actual regression over a couple of months was close to zero. However, the number of times they failed simply because code was changed due to new business requirements was huge. In other words: they provided close to zero value while having high maintenance costs.
The end-to-end suite did catch a regression now and then; its drawback was the usual one: it was very slow to run and maintaining it could be quite painful.
The moral of the story could have been to drastically cut down on writing unit tests. Or maybe to write them while implementing a new ticket or fixing a bug, but throw them away after it went live. But of course this didn't happen. It sort of goes against human nature to throw away something you just put a lot of effort into.
That’s what I believe Facebook have created here, so you’re right that ‘regression’ is a big word - the tests are more likely detecting change, e.g. by asserting the existing behaviour of conditionals that were previously not executed.
And it will lock the system into behaviour that might just be accidental. The value of tests is to make sure you don't break anything that anyone cares about, not that every little never-used edge-case behaviour, which might just be an artefact of a specific implementation, is locked in forever.
This is my experience as well. The problem is that capturing "but what _should_ it do?" at a low level is seen as redundant, as long as everything works. Forgotten edge cases are typically detected elsewhere. The metric showing _that_ you ran past those lines of code says nothing about whether you got there for the right reason.
That was brilliantly written and summarised. Seems that Auth0 really did walk the walk in terms of developer experience and support. Thanks and good luck!
None of the reports mention whether two-factor authentication, or any other extra authentication factor that enterprise accounts would be secured with, was bypassed too. Am I right to assume that because the attacker had the signing key all of the extra authentication mechanisms that would have been enabled on accounts were bypassed by the attacker (because the attacker could create a token that bypassed all the extra authentication methods)?
And I presume there has been no known dump of e-mails exfiltrated during this attack?
Because it was a signing key that was stolen, the attackers could move straight to the post-authentication phase and forge authorization tokens.
Those email accounts could have had multiple authentication factors enabled, other conditional access policies applied (geo-location, device trust, time of day etc)… all of which were skipped over.
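To make that concrete, a conceptual sketch with PyJWT (this is not the actual AAD/Exchange token format, just an illustration of why holding the signing key is game over): whoever has the private key can mint a token with whatever identity and claims they like, and the relying service only ever checks the signature, so MFA and conditional access, which run before a legitimate token is issued, never get a say.

```python
# Conceptual illustration only (not the real AAD/Exchange token format): with
# the private signing key you can mint tokens with arbitrary claims, and the
# relying service only verifies the signature.
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Stand-in for the stolen signing key.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Attacker picks the identity and audience; no password, MFA or device check involved.
forged = jwt.encode(
    {
        "sub": "any-victim@example.com",
        "aud": "https://mail.example.com",
        "exp": int(time.time()) + 3600,
    },
    private_pem,
    algorithm="RS256",
)

# The service's side: signature checks out, claims look fine, request accepted.
claims = jwt.decode(forged, public_pem, algorithms=["RS256"],
                    audience="https://mail.example.com")
print(claims["sub"])  # any-victim@example.com
```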
> Am I right to assume that because the attacker had the signing key all of the extra authentication mechanisms that would have been enabled on accounts were bypassed by the attacker...?
https://www.planetjones.net/blog/03-05-2023/relaunching-my-p...