+1 to this. Especially with Go, most of the (non-integration) tests I end up writing are behaviour-driven tests. I'm not really sure how one can write that sort of test without interfaces and mocks.
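For example, a minimal sketch of what I mean (the Notifier interface and RegisterUser function are made up for illustration):

    package user

    import "testing"

    // Notifier is the behaviour the code under test depends on.
    type Notifier interface {
        Notify(email, msg string) error
    }

    // RegisterUser is the code under test; it only sees the interface.
    func RegisterUser(n Notifier, email string) error {
        return n.Notify(email, "welcome!")
    }

    // fakeNotifier is a hand-rolled mock that records the call.
    type fakeNotifier struct{ gotEmail, gotMsg string }

    func (f *fakeNotifier) Notify(email, msg string) error {
        f.gotEmail, f.gotMsg = email, msg
        return nil
    }

    func TestRegisterUserNotifies(t *testing.T) {
        fake := &fakeNotifier{}
        if err := RegisterUser(fake, "a@example.com"); err != nil {
            t.Fatal(err)
        }
        if fake.gotEmail != "a@example.com" {
            t.Errorf("notified %q, want %q", fake.gotEmail, "a@example.com")
        }
    }

The test asserts on behaviour (was the user notified?) rather than on any concrete implementation.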
One could use build tags to provide alternative implementations over a concrete type, I suppose.
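Roughly like this (just a sketch; the file names and the testmode tag are invented):

    // clock_real.go
    //go:build !testmode

    package app

    import "time"

    // The real implementation, compiled by default.
    func Now() time.Time { return time.Now() }

    // clock_fake.go
    //go:build testmode

    package app

    import "time"

    // A deterministic stand-in, selected with `go test -tags testmode`.
    func Now() time.Time { return time.Unix(0, 0).UTC() }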
But as with everything in life, there is no right solution, just different tradeoffs. If interfaces best align with the tradeoffs you are willing to make, go for it.
Could you elaborate a little bit more? These days I write a lot of Go code and I end up using mock objects for different interfaces to write unit tests. Is there another way to do this?
I have explained it many times. [0] Fundamentally, GPT-4, ChatGPT, and other LLMs are all in the same family of black-box deep neural networks which, decades after their invention, still cannot reason or explain their own decisions and outputs, and can only spit out what they have been trained on.
Researchers have only trained these LLMs on more data, and they have even less understanding of what the models do internally, since their architectures are a massive black box of unexplainable numbers and operations.
That isn't helpful to researchers, or even to serious professionals in high-risk industries. It makes LLMs less trustworthy for them and incredibly unsuitable for their use case in general.
This may have been true elsewhere, but I don't think it holds for GPT-4.
I suspect that complex intelligence, which cannot be directly attributed to the structure of the underlying LLM, has emerged. I am guessing it has to do with the use of language itself, and that at sufficient scale this property exists in both humans and models.
A lot of the experimentation I've done is too long and complex to fit nicely in an Ask HN post. People have a tendency to move the goalposts when assigning intelligence to AI. GPT-4 is different. Here is a post from earlier today that might be more convincing.
GPT-4 is no different from any old deep neural network; fundamentally, they are black boxes with no capability of reasoning. What we are seeing in GPT-4 is regurgitation of the text it has been trained on.
Not even the researchers who created it can get it to transparently explain its decisions.
The other party in any transaction knows enough to "deliver" whatever you purchased. In the world where all your transactions happen on a blockchain, that's everyone from Amazon to the corner pizza shop.
Do we have evidence that the virus has mutated into something more deadly? Such a mutation is clearly not beneficial to the virus. My guess would be that the virus mutated into something with a higher R0.
Read more about optimal virulence here: https://en.wikipedia.org/wiki/Optimal_virulence
Optimal virulence is kinda bullshit for rapidly evolving things on short timescales...
Sure, if a mutation pops up that causes deadly myocarditis in 50% of cases, that strain would have a clear evolutionary disadvantage... but if a strain evolves that gives 2x worse pneumonia (and maybe a 2x death rate, which overall is still low) with 4x more coughing, it will have an evolutionary advantage: more coughing => more virus in the air, etc.
So you absolutely can have viruses evolving towards increased lethality, as long as people infected with the new strain spread the virus more and don't die too soon... You won't get something evolving its way to 80% lethality, but it can very well evolve from <1% to... 50% (scary!).
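A crude back-of-the-envelope sketch of that tradeoff (all numbers invented purely for illustration):

    package main

    import "fmt"

    func main() {
        // Very roughly, R0 ~ transmissions per day * infectious days.
        // Baseline strain: mild, spreads slowly, long infectious period.
        baseR0 := 0.5 * 6.0 // ~3.0

        // More virulent strain: coughing doubles daily transmission,
        // but severe disease shortens the infectious period.
        worseR0 := 1.0 * 4.0 // ~4.0

        fmt.Printf("baseline R0 ~ %.1f, more-virulent R0 ~ %.1f\n", baseR0, worseR0)
        // The deadlier strain still out-spreads the milder one, so
        // selection can favour it despite the higher death rate.
    }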
The whole optimism about patterns of viral evolution is 100% wrong unless you're thinking on very large timescales, and in environments where humans haven't added so many accelerating factors!
I learned Go as my first statically typed language (I worked with Ruby and some Python before that). Although I missed the functional goodies from Ruby, I thought that was the price I had to pay for performance and static checking guarantees. Nevertheless, I enjoyed writing Go and its take on writing concurrent programs. As others have called it, I thought of Go as "modern C with garbage collection and first-class concurrency primitives".
All this changed when I started learning Rust; man, was I blown away by the language. I never thought a statically typed language with manual memory management could feel so "scripting language"-like. (Sure, the borrow checker annoys you sometimes, but that's nothing a `.clone()` can't fix when you are starting out.) I truly enjoy writing Rust these days, and never have I missed anything major from Ruby.
I'll be damned if I sit here and criticize Ken Thompson and Rob Pike, but I feel like in their quest to make Go "simple" they've perhaps held on a little too hard to their past.
tl;dr: Go feels a little too verbose and "dated" after learning Rust.
I had this exact same progression, but with Python instead of Ruby. Go was a great intro to statically typed languages though; I'd take Go over the JVM any day.
Currently love Rust, but Crystal, Nim, and Swift are all rather distracting.
Once Go gets generics I’ll be interested in giving it a second look.
K8s tries to abstract away individual "servers" and gives you an API to interact with all the compute/storage in the cluster.
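For a feel of what that API looks like, here is a minimal client-go sketch that lists every node's capacity (assuming the standard kubeconfig location; error handling kept naive):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the usual ~/.kube/config.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // One call against the API server sees every node in the cluster;
        // you never address an individual machine.
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.Memory())
        }
    }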