hermitdev's comments | Hacker News

> Meanwhile there is another reason why the number of government workers has gone down:

Uh... excluding the very recent cuts this year under Trump, the number of civilians in the US government work force has gone up fairly steadily. [0]

We had 23.592 million civilian employees in Jan 2025. 21.779M in Jan 2021, after being largely stagnant overall the previous 10 years. That's a net change in excess of 1.8M employees under Biden.

I do find it interesting that it appears that employee count was flat, or even down under Obama, but until COVID, there was a steady increase under Trump v1.

[0] https://fred.stlouisfed.org/series/USGOVT


> the number of civilians in the US Federal work force has gone up fairly steadily.

The graph you provided is not Federal government, it's all US government, which includes state, city, and other types of government employees. It should be expected that this grows with population size; to get a sense of whether it's really shrinking or growing, you should divide by population. But in any case, this chart doesn't back up your claim that the Federal government is growing.

The link to the Federal government was just underneath that graph: https://fred.stlouisfed.org/series/CES9091000001. US Federal government absolute size peaked in 1991 and has gone down slightly since then. If you divide this one by population, the decline would be a bit stronger and more obvious. The ~10 year spikes are census workers. Notice we can see the peak in 1991 with or without the census spikes.


And besides government institutions there are lots of institutions at the state and city level.

Institutions at the state and city level are called “government”, and those are included in the data parent linked to.

> because the simulator had a bug

I had something similar happen when I was taking microcomputers (a HW/SW codesign class at my school). We had hand-built (as in everything was wire wrapped) 68k computers we were using and could only download our code over a 1200-baud serial line. Needless to say, it was slow as hell, even for the day (early 2000s). So, we used a 68k emulator to do most of our development work and testing.

Late one night (it was seriously like 1 or 2 am), our prof happened by the lab as we were working and asked to see how it was going. I was project lead and had been keeping him apprised and was confident we were almost complete. After waiting the 20 minutes to download our code (it was seriously only a couple dozen kB of code), it immediately failed, yet we could show it worked on the simulator. We single-stepped through the code (the only "debugger" we had available was a toggle switch for the clock and an LED hex readout of the 16-bit data bus). I had spent enough time staring at the bus over the course of the semester that I'd gotten quite good at decoding the instructions in my head. I immediately saw that we were doing a word compare (16-bit) instead of a long compare (32-bit) on an address. The simulator treated all address compares as 32-bit, regardless of the actual instruction. The real hardware, of course, did not. It was a simple fix. Literally one bit. Made it in memory on the computer instead of going through the 20-minute download again. Everything magically worked. Professor was impressed, too.
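For illustration only (not the original 68k code; the addresses are made up): the class of bug can be sketched in Python, where a 16-bit "word" compare only sees the low 16 bits of a 32-bit address:

```python
# Two distinct 32-bit addresses whose low 16 bits happen to match.
ADDR_A = 0x0001_8000
ADDR_B = 0x0002_8000

# A "long" (32-bit) compare tells them apart, as the simulator did...
assert ADDR_A != ADDR_B

# ...but a "word" (16-bit) compare only sees the low 16 bits, so
# hardware executing the word-compare instruction wrongly treated
# the two addresses as equal.
assert (ADDR_A & 0xFFFF) == (ADDR_B & 0xFFFF)

print("word compare collides; long compare does not")
```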


> Which is redundant for most functions as they only have positional parameters.

Huh? This is not true.

    def foo(a, b, c): ...
This can be invoked as either `foo(1, 2, 3)` or `foo(c=3, b=2, a=1)`:

    >>> def foo(a, b, c):
    ...     print(f"{a=}")
    ...     print(f"{b=}")
    ...     print(f"{c=}")
    ...
    >>> foo(1, 2, 3)
    a=1
    b=2
    c=3
    >>> foo(c=3, b=2, a=1)
    a=1
    b=2
    c=3
    >>>


    Help on built-in function sin in module math:

    sin(x, /)
        Return the sine of x (measured in radians).

   
    >>> math.sin(x=2)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
        math.sin(x=2)
        ~~~~~~~~^^^^^
    TypeError: math.sin() takes no keyword arguments
`/` is used everywhere and it's usually just noise. Unexplained noise.


It is often used for builtins, because emulating the default Python behaviour of accepting arguments both by position and by name is a pain with the Python/C API. (There are other use cases for positional-only arguments, such as accepting an arbitrary function and an arbitrary set of arguments to call it with at the same time—for example, to invoke it in a new coroutine—but they are pretty rare.) This peculiarity of most builtin functions has been there since before Python 3 was a thing; it was just undocumented and difficult to emulate in Python before this syntax was introduced.
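Since PEP 570, pure-Python functions can spell out the same positional-only behaviour with the `/` marker. A minimal sketch with a hypothetical function:

```python
def div_pair(a, b, /):
    # "a" and "b" are positional-only (PEP 570), mirroring how many
    # C-implemented builtins like math.sin behave.
    return a // b, a % b

print(div_pair(7, 3))    # fine: (2, 1)

try:
    div_pair(a=7, b=3)   # rejected, just like math.sin(x=2)
except TypeError as exc:
    print(exc)
```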

As for unexplained noise—well, all other parts of the function declaration syntax aren’t explained either. You’re expected to know the function declaration syntax in order to read help on individual function declarations; that’s what the syntax reference is for.


How would you discover the syntax reference via the repl help() system?


Found it.

>>> help('def')


A thought: maybe the pulse could be some sort of link status probing? e.g. "is this thing plugged in?"


Caveat: I have looked at neither the API nor the implementation of Kreuzberg; this is purely from personal experience.

Even with CPU-bound code in Python, there are valid reasons to be using async code. Recognizing that the code is CPU bound, it is possible to use thread and/or process pools to achieve a certain level of parallelism in Python. Threading won't buy you much in Python before 3.13t, due to the GIL. Even with 3.12+ (with the GIL enabled), it's possible (but not trivial) to use threading with sub-interpreters (each of which has its own, separate GIL). See PEP 734 [0].
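As a sketch of the pool-based approach (the workload and names here are hypothetical), asyncio can hand CPU-bound work to a process pool via `run_in_executor` while the event loop stays free for I/O:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Stand-in for real CPU-heavy work.
    return sum(i * i for i in range(n))

async def main() -> list[int]:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each worker process has its own GIL, so the jobs run in
        # parallel while the event loop remains responsive.
        return await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_bound, n)
              for n in (10_000, 20_000))
        )

if __name__ == "__main__":
    print(asyncio.run(main()))
```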

I'm currently investigating the use of sub interpreters on a project at work where I'm now CPU bound. I already use multiprocessing & async elsewhere, but I am curious if PEP 734 is easier/faster/slower or even feasible for me. I haven't gotten as far as to actually run any code to compare (I need to refactor my code a bit with the idea of splitting the work up a bit differently to account for being CPU instead of just IO bound).

[0] https://peps.python.org/pep-0734/


Will it hold the GIL if you use a thread executor with asyncio for a native C / FFI extension? If that's the case, that would also add to the benefits of asyncio.


It's not that they don't understand; it's that it causes more work for the lawyers, because someone has to review the license if it's not one of the standard boilerplate acceptable licenses.

I once had to go to lawyers to get a license approved on a lib that went something along the lines of "this work is in the public domain; do whatever the fuck you want with it, just don't come crawling to me for help". I'm paraphrasing from ~20-year-old memories here, but I do distinctly remember the profanity. It elicited a chuckle from the lawyer and something along the lines of "I wish all of these were this simple".


JSLint had a standard open source license with an addition that said “The Software should be used for Good, not Evil.”

IBM wanted to use it but their lawyers balked at the added restriction. They wrote to the author to see if it could be removed. The author wrote back with, “I give permission for IBM, its customers, partners, and minions, to use JSLint for evil.”


If you're doing this sort of license analysis, try a license detector I built. It tries to guess which license was used and diff any changes. Lots of licenses are small changes from others, so it usually makes multiple guesses.

https://alexsci.com/which-license/


Sounds like https://en.wikipedia.org/wiki/WTFPL

"DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE"


Total stab in the dark here, but it could be the local time on the device. Your local clock needs to be relatively close to the actual time. I've run into this more than a few times with an old Surface tablet I seldom use. Powered off/battery dead, the clock gets out of sync. Power on, and it cannot get online because everything is TLS/SSL now, even clock sync. It cannot even sync the clock, because of certificate issues. Manually setting the time to an approximately correct value has resolved my issues with long-powered-off devices. That is assuming, of course, the _ability_ to set the time on the device.


> but it could be the local time on the device. Your local clock needs to be relatively close to the actual clock.

Yes, this is one of those nasty hidden costs that the "just use TLS/SSL for everything, it's easy!" people don't seem to recognize: introducing certificates to the mix suddenly makes your application coupled to wall-clock time being in sync with the rest of the world. That is a big step in complexity right there, as everyone who has ever had a clock drift a couple of minutes off from the rest of the world, and watched half of the Internet stop working for them, can attest.
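To illustrate the coupling (with a made-up validity window, not a real certificate): certificate validation ultimately includes a wall-clock comparison against the certificate's notBefore/notAfter dates, so a skewed clock fails it:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical certificate validity window (90 days).
NOT_BEFORE = datetime(2024, 1, 1, tzinfo=timezone.utc)
NOT_AFTER = NOT_BEFORE + timedelta(days=90)

def cert_valid_at(now: datetime) -> bool:
    # The time-based part of certificate validation: is "now"
    # inside the certificate's validity window?
    return NOT_BEFORE <= now <= NOT_AFTER

# A correct clock inside the window: validation succeeds.
assert cert_valid_at(datetime(2024, 2, 1, tzinfo=timezone.utc))

# A dead-battery clock reset to an old date: every certificate
# looks "not yet valid", so all TLS connections fail -- including
# the one you'd use to sync the clock.
assert not cert_valid_at(datetime(2015, 1, 1, tzinfo=timezone.utc))
```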

(And don't get me started on getaddrinfo(), another step function in complexity, hard-coupling even most trivial software to a heap of things that isn't relevant to it at all; or how it all interacts with SSL.)


On desktop, the embedded video is also fairly small and, annoyingly, they disallow full-screen. Why?


FWIW, you have some control over how this inference is done. Search your settings in vscode for `@ext:ms-python.vscode-pylance strict`.


> That `String` leaks memory, [...]

So does the clone...

