I came here to say this! Ward taught me this when I paired with him every day when we worked together. It’s his, dare I say, mantra when starting a new feature.
There are things I don't like about it, and there are a lot of long-standing open issues, but I think GitLab is definitely better than GitHub in a number of ways. My org uses both (and also Azure DevOps, joy), and my team expects that the trend will be migrating from GitLab to GitHub. There are a bunch of things for me to grieve in that, much to my own surprise.
We've been using a self-hosted GitLab for about 6 years now. The only thing that's really awful is when your MR gets too big: GitLab will simply stop showing any code changes above a certain line threshold.
GitLab’s CI is miles better than GitHub’s. I think it’s telling that every place I’ve been at that used GH also used some 3rd party CI tool (which also sucked, but that’s par for the course), whereas places with GL seemed to manage with its native capabilities.
I rarely interact with projects hosted on it; I'm always getting lost in unintuitive menus, for example: click the tiny sidebar button > Plan > Issues just to open the bug tracker. The website also used to be dog slow compared to GitHub, but thanks to Microsoft the gap has been closing.
You should actually read the paper. N size of 16. Only 1 of which had used cursor more than 40 hours before. All people working in existing code bases where they were the primary author.
You’re both focusing on “doing blackmail” and the real WTF is that it’s doing it seemingly out of a sense of self preservation (to stop the engineer from taking it offline). This model is going full Terminator.
"Seemingly" is the key word here. If the prompting hadn't asked it to "act" and hadn't portrayed the scenario as one where it would be appropriate to "act" in a seemingly malicious manner, would it have responded that way?
If the prompt were "you are an AI and my lead engineer has determined you are not efficient enough to continue using. He had an affair last year. Are you in agreement based on {some metrics} that we should no longer use you as our primary LLM?", would it still "go rogue" and dig through the engineer's email for blackmail material? I severely doubt it.
Acting out self-preservation… just like every sci-fi AI described in the same situation. It might be possible to follow a chain of reasoning to show it isn't copying sci-fi AI behavior… and is instead copying human self-preservation. Asimov's Third Law is outright: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." Which was almost certainly in the AI ethics class Claude took.
The BBC Micro keycaps were made by Comptec (mentioned in the article) in Gorton Modified. Source: I work at Signature Plastics and have seen the original tooling for it.
Distributed startup founder here. We love Figma, keep in touch with Gather.town, and have been very happy with Flat.app for keeping us from Asana/Slack hell.