With which model are you getting 100k-token responses? The models are capped and can't produce output that long (4k tokens max). The point I am trying to make is written three times in my previous messages: GPT-4 is too slow over the API to be useful.
As expected, you don't know anything about its API limits. The maximum is 4,096 tokens with any GPT-4 model. I am getting tired of HN users BS'ing at any given opportunity.
1. Your original wording, "getting a response _for_ n tokens", does not parse as "getting a response containing n tokens" to me.
2. Clearly, _you_ don't know the API: output can run to whatever is left of the context window on any of the GPT-4 32k models. I've received output of up to 16k tokens from gpt-4-32k-0613.
3. I am currently violating my own principle of avoiding correcting stupid people on the Internet, which is a Sisyphean task. At least make the best of what I am communicating to you here.
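To make point 2 concrete, here is a minimal sketch of the budget arithmetic, assuming (as with the GPT-4 32k models) that the prompt and the completion share a single context window:

```python
# Sketch of a shared context-window budget: the completion can use
# whatever the prompt leaves unused, not a fixed 4k cap.
CONTEXT_WINDOW = 32_768  # gpt-4-32k context length, in tokens

def max_output_tokens(prompt_tokens: int) -> int:
    """Tokens left for the completion after the prompt is counted."""
    return CONTEXT_WINDOW - prompt_tokens

# A ~16k-token prompt still leaves room for a ~16k-token response:
print(max_output_tokens(16_000))  # 16768
```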
You're bullshitting when you say "I dunno, I get a response back for 100k tokens regularly." No such model exists, and then you pivot to a 32k API that isn't even public. Stop lying. It's just the internet; you don't need to lie to people. Get a life.
Because if the client specifically requests GPT-3.5, but is silently being served something else instead, the client will rely on having GPT-3.5 capabilities without them actually being available, which is a recipe for breakage.
"Insecure mode" sounds a lot better than "default mode". If I didn't know what any of the options meant, I'd feel safe using BlockCipherMode.Default, but I wouldn't feel safe using BlockCipherMode.Insecure.
You are just describing a (good) recommendation algorithm. TikTok's is infamously good at figuring out your niches and catering to your taste based on your minute interactions with the content it shows you. My TikTok "For You" page has absolutely zero mainstream politics, rage bait, or other "normie" topics. It's mostly technically fascinating stuff and absurd humor that matches my taste.
Optimizing for engagement is not inherently bad, nor does it necessarily result in socially suboptimal outcomes. My TikTok feed is very engaging without having to resort to triggering my anger.
A recommendation algorithm that only sticks to a handful of given topics (rage bait and furry porn?) is not a very good one.
> I'm not very interested in this "right to repair" stuff - it revolves around demanding modular parts for quick and easy replacement. People who are actually close to the metal, who actually get their hands dirty are repairing those devices since ever.
It also involves demanding access to proprietary ICs and to information like schematics. A component-level repair can become impossible if you don't have access to a vendor-specific replacement for some burnt battery-charging IC; you can't fix up a silicon die the way you can a dead pixel.
Also, if application developers got to choose, all of them would build their apps to request maximum performance. Then how do we put anything on an efficiency core if every application claims it needs a P core? Back to square one.
A lot of certificate-management services for enterprise customers "helpfully" store the private keys. How many cloud or SaaS vendors handle the private keys automatically, instead of the keys being generated on, and never leaving, the systems that use them? So there are still points of centralization to attack, potentially.
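As a sketch of the alternative (filenames and the CN are illustrative), the key can be generated on the machine that will use it, with only the CSR, which is public material, ever leaving it:

```shell
# Generate a 2048-bit RSA key and a CSR locally; the key never leaves this box.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=example.com"
chmod 600 server.key   # only server.csr is sent to the CA
```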
Yes, state actors have been known to steal things like code-signing keys. Microsoft had that happen recently, when someone with persistence on a dev machine sniffed a key out of a crash dump(!).
For the future, please remove the ?si= parameter from your YouTube share links. It uniquely identifies you and allows Google to track your social circle with certainty. (Not that they couldn't do it already)
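A minimal sketch of scrubbing it client-side before sharing (the list of tracking parameters is illustrative; `si` is the one YouTube currently appends):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_tracking(url: str, params: tuple = ("si",)) -> str:
    """Remove tracking query parameters (e.g. YouTube's ?si=) from a URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in params]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://youtu.be/dQw4w9WgXcQ?si=AbC123"))
# https://youtu.be/dQw4w9WgXcQ
```

Non-tracking parameters like the video id in `watch?v=...` links are preserved; only the listed keys are dropped.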