Saying, as a government employee, that “Hey, the systems, processes, and people at the VA are pretty okay” is not discussing the ‘private details’ of his employees in any way that could be considered inappropriate.
It’s not like he revealed details from private DOGE strategy meetings or discussions.
This industry has basically no standards for what a software engineer should know and chronically underinvests in training people, yet people will jump on Hacker News and bitch about others not knowing what they're doing, as if they haven't been saying university is worthless for the past seven years.
I know very well what working with non-10x engineers is like.
I also know very well that performance is a management and HR issue, not an engineering issue.
Your job as a competent engineer is to make your team work. Your responsibility is to help out fellow team members whenever they need to be unblocked, and to create a healthy, accepting, tolerant work environment. Your job is to be trustworthy, not make others hate their job, and not be the toxic asshole who makes everyone miserable and drives others away from your team. Because otherwise it is you who creates high-stress work environments and burns people out.
Complaining about "DEI" is a coping mechanism of incompetent individuals who prefer to fabricate conspiracies to justify why someone else was chosen over them. It's a rehash of the old complaint that low-paid immigrants are stealing jobs that would otherwise be rightfully theirs.
in the olden days of Slashdot, this was addressed by decoupling "insightful" from "funny": you wouldn't get karma for being funny, but you weren't punished for it either.
Good point. Yes, I think I miss that about Slashdot too. There's such a thing as a healthy amount of banter, and the good stuff (not overly sardonic) takes the edge off the doom imho.
there is a world of difference between interacting for an hour with three people you don't know, for the explicit purpose of stress-testing your experience and knowledge, and interacting with three people you talk to every day about a project that is well familiar to you.
The backstory is that she illegally entered from Mexico and was picked up by CBP in the US. Mexico won't take her back, so ICE has to arrange deportation. Unsurprisingly there are no return flights from a bumfuck border town to Germany, so she was detained (and interrogated; her Instagram shows her giving tattoos to people in Mexico), sent to San Diego, and held until ICE could arrange a flight. ICE prioritizes mass deportations, so a single person likely gets put at the back of the queue.
A sane border would just block illegal entry. But pretending that ICE should be optimized for single-person expedited deportation is just stupid. While in CBP custody you may not be allowed to contact your lawyer, but the 60 days she was in ICE custody were completely fair game, and she didn't for some unknown reason.
Sounds like due process was exactly what she got. Are you suggesting everybody gets a trial before they are deported? Or that the US has to allow you to enter while a trial determines whether you did so legally? Either would result in much, much longer deportation times.
Usually not at the border, no, because there it's an administrative matter, and you are simply sent back to your originating country.
However, if you are arrested within the borders, yes, it becomes a different matter where you _are_ going through a judicial process to assess what should happen and why exactly (and to document the process).
What reason is there to think that there is any more backstory? More transparency might yield new information, but it might not. The situation might simply be exactly as it appears.
I should look into this, because I have over 132,000 words of text in my notes (more than the average novel) and I'm curious whether I can 'talk' to my second brain via an LLM.
Smart Connections[0] plug-in for Obsidian is worth checking out.
It does a really good job of indexing (with local or OpenAI embeddings) and RAG, letting you chat with various models about your notes. The chunking and context algorithms it uses seem well designed and find most or all of the relevant details for most things I try to discuss.
It's well implemented and provides useful and interesting discussions with my journal/notes.
You could first ask an LLM to compress your notes. There was some informal research into this a while back: LLMs can translate text into a much shorter representation that only they can understand. That might let you get around the context-size limits.
More practically (or additionally) you could just ask it to summarize them or extract the most relevant parts.
Alternatively, I think the most popular approach is RAG, though someone else will have to fill you in on the current state of the art.
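To make the RAG idea concrete, here's a toy sketch of the retrieval step: embed each note, embed the query, and stuff the most similar notes into the prompt. This is just an illustration — real setups (like the Obsidian plugin above) use learned embedding models and a vector store, and the bag-of-words "embedding" here is a stand-in for those.

```python
# Toy illustration of RAG retrieval: rank note chunks by similarity to a
# query and build a prompt from the top matches. The bag-of-words embed()
# is a hypothetical stand-in for a real embedding model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding: lowercase word counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(notes: list[str], query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)
    return ranked[:k]

notes = [
    "Meeting notes: discussed the Q3 roadmap and hiring plan.",
    "Recipe: sourdough starter needs feeding twice a day.",
    "Journal: started reading a book about Roman history.",
]
query = "hiring plan for Q3"
context = retrieve(notes, query)
prompt = "Answer from these notes:\n" + "\n".join(context) + "\n\nQ: " + query
```

The payoff is that only the few retrieved chunks go into the prompt, so the 132k-word vault never has to fit in the context window at once.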
dingllm is very straightforward: you submit your entire selection/buffer and it streams the response out at the current position. Mine is a bit more complex, with configuration, context management, and so on.
The thing I always tell people is to just roll your own. The docs are there, the LLM is there, use them. At the end of the day it's just an HTTP call that takes text from your buffer and puts text back into a buffer.
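That "just an HTTP call" claim can be sketched in a few lines. This assumes an OpenAI-style chat completions endpoint; the URL, model name, and `OPENAI_API_KEY` env var are assumptions — swap in whichever provider your plugin actually talks to, and add streaming if you want dingllm-style output.

```python
# Minimal "roll your own" editor integration: POST buffer text to an
# LLM HTTP API and return the completion text for insertion at the
# cursor. Endpoint/model/env var are assumed, not prescribed.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def build_payload(buffer_text: str, model: str = "gpt-4o-mini") -> dict:
    # The request body is just the buffer text wrapped as a chat message.
    return {
        "model": model,
        "messages": [{"role": "user", "content": buffer_text}],
    }

def complete(buffer_text: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(buffer_text)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Your editor plugin inserts this string at the cursor position.
    return body["choices"][0]["message"]["content"]
```

Everything else a plugin does (keybindings, streaming, context selection) is convenience layered on top of this one call.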
no?