I haven't done any Go in my life, but this bit about immutability interests me. Is there a general movement in the Go community towards immutable data structures?
I don't think there is a movement towards immutable data structures. The const keyword only works on basic types (booleans, numbers, and strings). Structs can have unexported fields, but those are only private at the package level. You can also pass by copy instead of passing a pointer.
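A minimal sketch of what that means in practice (point, move, and maxRetries are hypothetical names, just for illustration): const is limited to basic types, and passing a struct by value hands the callee a copy.

    package main

    import "fmt"

    type point struct{ X, Y int }

    const maxRetries = 3 // consts can only be booleans, numbers, or strings
    // const origin = point{0, 0} // would not compile: not a constant type

    // move receives a copy of p; the caller's value is never mutated.
    func move(p point) point {
        p.X++
        return p
    }

    func main() {
        orig := point{1, 2}
        moved := move(orig)
        fmt.Println(orig, moved) // {1 2} {2 2}
    }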
This is something I'm hoping will change with the introduction of generics. Right now, using any custom data structure like an immutable map or list is very cumbersome, requiring either code generation or runtime type coercion via `interface{}`.
Go espouses "Don't communicate by sharing memory; share memory by communicating", i.e. don't let goroutines communicate by mutating shared data, but then doesn't provide any effective immutable data structures to make this easy.
Being able to safely send pointers to immutable maps over channels would make Go very nice to work with. Although I'll never use Go outside of work until they remove nullable pointers, which seems unlikely.
Channels. Share information between goroutines with channels. If you're passing pointers down a channel and then mutating them, that's on you. Just set yourself on fire instead, much easier that way lol.
You don't need immutable structures to communicate between goroutines. Value types are fine. Just think a bit about how to use channels as signal carriers; it eventually starts to make sense.
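For example (a tiny sketch with a made-up result type), sending values rather than pointers means each goroutine works on its own copy:

    package main

    import "fmt"

    type result struct {
        ID    int
        Score float64
    }

    func main() {
        results := make(chan result)

        go func() {
            // Send values, not pointers: the receiver gets its own copy,
            // so there is no shared mutable state between goroutines.
            results <- result{ID: 1, Score: 0.9}
            close(results)
        }()

        for r := range results {
            fmt.Println(r.ID, r.Score)
        }
    }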
I think I miscommunicated what I meant; there was an implicit assumption that copying value types over channels is bad due to GC overhead. Efficient immutable data structures let you have your cake and eat it too: you can avoid GC overhead through structural sharing while still avoiding mutation problems between goroutines.
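Roughly the idea, sketched with a hypothetical persistent linked list: "adding" an element allocates one node and shares the rest, so nothing is copied and nothing already reachable is ever mutated.

    package main

    import "fmt"

    // list is a persistent singly linked list; nodes are never mutated.
    type list struct {
        head int
        tail *list
    }

    func cons(v int, l *list) *list { return &list{head: v, tail: l} }

    func main() {
        shared := cons(2, cons(3, nil)) // [2 3]
        a := cons(1, shared)            // [1 2 3]
        b := cons(9, shared)            // [9 2 3] -- tail shared with a

        fmt.Println(a.head, a.tail == shared) // 1 true
        fmt.Println(b.head, b.tail == shared) // 9 true
    }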
Yes, after some discussion and minor modifications (and removal of a few features): https://github.com/golang/go/discussions/47323 ... I understand the motivation for this new package, but it does seem like a large API surface to have imported into the stdlib.
No, it's not. Someone who is a former core maintainer likely has a better perspective on what belongs or doesn't than 99% of the community. There's a reason most stdlib and language decisions in Go are run through an extremely small cabal of people.
I think what OP is trying to say is that the merit of a given feature alone should determine whether it goes into the stdlib or not, and the author's reputation should not result in something being merged that otherwise wouldn't be.
Of course it should have merit, and it does. I invoked the author's reputation because he has previously both contributed and judged merit himself -- his opinion carries far more weight than any random gopher's.
The investigation to justify the introduction of the Cut function was an interesting read. Really solid data showing that the function, while redundant with other functionality provided, is an extremely common operation and worth special-casing.
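For reference, the new function and (roughly) the pre-1.18 idiom it replaces:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Go 1.18: Cut slices s around the first instance of the separator.
        user, host, found := strings.Cut("gopher@example.com", "@")
        fmt.Println(user, host, found) // gopher example.com true

        // Roughly what people wrote before:
        s := "gopher@example.com"
        if i := strings.Index(s, "@"); i >= 0 {
            user, host = s[:i], s[i+1:]
        }
        fmt.Println(user, host) // gopher example.com
    }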
I enjoyed reading that too. It's great to see the amount of thought that the language maintainers put into design, especially for things like syntactic sugar (which can get overwhelming in other languages).
I like Go, but I find slice operations leave a lot to be desired. It feels weird to use built-in functions (like append) to operate on a slice, and I miss having easy map, reduce, and filter functions available as methods on the slice itself. I guess things are getting better, but idk.
Agreed. I'm doing Advent of Code in Go and it is so verbose to do simple things like filtering. I agree that for loops can be easier to read, but when it takes 4 LOC to express what a one-line filter call can, and you have 3+ maps or filters chained together, it ends up far less readable because it is so noisy. Sure, you can understand it, but it's a mess and takes too long to parse relative to how important it actually is.
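For what it's worth, with 1.18 type parameters you can at least write the one-liner helper once yourself (filter below is a hypothetical helper, not part of the standard library):

    package main

    import "fmt"

    // filter keeps the elements of in for which keep returns true.
    func filter[T any](in []T, keep func(T) bool) []T {
        var out []T
        for _, v := range in {
            if keep(v) {
                out = append(out, v)
            }
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3, 4, 5, 6}
        evens := filter(nums, func(n int) bool { return n%2 == 0 })
        fmt.Println(evens) // [2 4 6]
    }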
Oh, I understand what you mean now. I thought you meant the functionality, not the idiom of calling it as a method. I dare say that it didn't occur to me that this difference would be great enough for someone to complain about it on the internet. :P
I find map/filter/reduce much easier to read if it is being used for a functional map, filter, or reduce operation or something not too distant from it.
> A straight for loop is infinitely better in every way.
A straight for, for-of, or for-in loop, as appropriate, is finitely better for imperative operations that don't naturally fit map/filter/reduce but need to be performed over some set of values.
Disagree. I used Python heavily before Go, and Go is much, much easier to read. I've never seen Go code in any codebase I couldn't immediately follow. I can't say the same about Python, and that's supposedly an easier HLL.
Then again, I learned BASIC first technically, so my brain probably isn't quite right :).
I learned C first but then I rode the functional bandwagon (and dynamic languages) and spent many many years in Python and Javascript. I thought map/filter was much better than imperative for a long time.
I rediscovered imperative programming later and found it to be such a breath of fresh air.
Are you actually reading people's comments before replying? You're posting these completely irrelevant tangents as if they contradict earlier posts (they don't), and in a tone that suggests you're the only person who knows C (you're not).
Mainly, IME, because of the combination of JS's map etc. passing extra, infrequently used arguments to the callback, and functions you might want to pass in accepting optional, less-frequently-used parameters of their own. The two can interact horribly; the classic gotcha is `someArray.map(parseInt)`, where parseInt silently receives each element's index as its radix argument.
I think it's important to understand that big parts of the software community over-rely on this type of programming. In my opinion, overuse of higher-order functions is a cancer on programming languages; they should be used at the absolute minimum, or even omitted from a language entirely.
Yes, I use this much more often than the more powerful general split feature, it feels very "right-sized".
split_once() and similar split features in Rust are interesting because Rust doesn't have overloading, yet you can split on a string or a character. This relies on a trait, Pattern, that is nightly-only, so you can't implement it yourself in stable Rust today, but eventually this factors out the commonality, which is cleverer than overloading (because it applies everywhere automatically).
The point is you can't implement Pattern, because it's a nightly feature. If you just make your own, that isn't Pattern and provides none of the benefits.
Yes, you can implement your own trait, and provide blanket implementations for it, but unlike Pattern yours is not part of the standard.
The nice thing about Pattern is that if Rust added, say, split_exactly_six_times() to the standard library, it would take separator: Pattern too, and so things which implement Pattern would qualify. I can't see a way to get that benefit for my own traits.
It would be neat to imagine a language where this was an optimizer feature rather than a decision the programmer made. Many functions that return an array ultimately use just one, or a few, of its elements; the compiler could make a lot of decisions in the caller about how to reduce allocation or short-circuit array processing.
I think, but am not sure, that this is what the experimental parsing language 'wuffs' is about: https://github.com/google/wuffs -- the language itself is aware of array lengths as a first-class concept and can make decisions accordingly.
Wuffs is, as its name suggests, primarily about safety, but it gets performance advantages that fall out of safety (e.g. it needs no runtime bounds or overflow/underflow checks because it has proved at compile time that those things can't happen).
I’ve been copy-pasting or re-writing a function much like strings.Cut, probably dozens of times, in different Go packages. It’s nice to see this function make it into the standard library!
I used to do that. I had a "goutils" package or something like that.
From use, I concluded that the utilities package contained only two things: functionality that was substantial enough to clean up and publish as its own package on GitHub, and minor functionality that was better off being copy/pasted between projects. So I abandoned my utility package and published a few minor packages on GitHub.
For those that have switched to Go from nodejs to serve json for a web app, how did it go? (excluding the possibility of needing something like react ssr which can be a big one for many apps).
> excluding the possibility of needing something like react ssr
No need to exclude that possibility. I once worked on a Go app that rendered React (in Typescript) server-side. The Javascript could call Go API functions directly when rendering server-side and those calls would turn into gRPC-web calls after it was loaded in the client. It worked really well.
> would turn into gRPC-web calls after it was loaded in the client
Isn't the point of SSR to avoid needing to have the client make additional requests for the first render? If I return something like <div dangerouslySetInnerHTML={fetchWithGrpc(myGoFunc)} /> that's not SSR. Perhaps I'm missing something.
> Isn't the point of SSR to avoid needing to have the client make additional requests for the first render?
Yes, hence it was able to call the Go functions directly during SSR; the results were bundled with the payload delivered to the client. gRPC played no role during the initial render.
If the React app needed more information/updates as the user used the app then the function calls would transparently happen over gRPC instead.
Consider:
    import { useState, useEffect } from 'react'

    const MyReactComponent = () => {
      const [things, setThings] = useState()
      useEffect(() => {
        // In-memory call to Go GetThings function during SSR render; gRPC call
        // to Go GetThings function when running in the client.
        GetThings().then(setThings)
      }, [])
      return <div>{things}</div>
    }
Architecturally, it's not a whole lot different from how you might build an SSR React app on Node. The backend was just written in Go instead, and that backend had a built-in JavaScript runtime to execute the frontend code for SSR purposes.
The primary motivation is curiosity to learn whether this becomes "the [best/default] way" folks reach for when leveraging BuildInfo to implement binary versioning in Golang.
It could be a nice benefit to the entire Go ecosystem if a widely used, de facto, consistent automatic versioning scheme emerges (for the common cases, e.g. minor point-release lineage).
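If it does catch on, reading the stamped metadata back out is already straightforward; a small sketch using runtime/debug (the vcs.* keys are the setting names documented for 1.18):

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        info, ok := debug.ReadBuildInfo()
        if !ok {
            fmt.Println("no build info embedded")
            return
        }
        fmt.Println("module version:", info.Main.Version)
        for _, s := range info.Settings {
            switch s.Key {
            case "vcs.revision", "vcs.time", "vcs.modified":
                fmt.Printf("%s = %s\n", s.Key, s.Value)
            }
        }
    }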
If the build time is being included in the binary by default, I guess that means builds are not reproducible by default? Is there any easy way to exclude or strip the build time to get a reproducible binary?
> "reproducible by default" if the system is setup a certain way
What do you mean? Under what circumstances is a Go build not reproducible? From what I've observed, if you build your binary with a given set of inputs, the same binary is produced. Is that sometimes not true?
But you literally said "build stamps do not change this", and they absolutely do.
Of course, as I mentioned, it does not look like Go is adding a build time to the binary.
3. It’s in the pending release notes with all the other new features and further information in the post you’re commenting on. https://tip.golang.org/doc/go1.18
Try to be more constructive if you want a conversation please.
1. Wrong. There are cases where reproducible builds are valuable. There are other times when you don't care. This allows you to choose what fits your situation.
2. Okay, what about projects that don’t fit your criteria? This is useful and requested.
You seem to have a different philosophy than other people but seem to be unable to consider those other philosophies. It’s your way or the highway.
Not everyone wants an exciting language. Some people want a stable and useful language. Go is intentionally not exciting. They add features based on user feedback and real world use, at a slow and careful pace.
Also consider you might get downvotes not because it’s about go, but because you’re kinda rude. You just spouted negativity with no explanation and called tons of people Sons of Bitches. Could be that?
Regarding your criticism of reproducible builds, I'm not sure it's entirely correct. The impression I got from a skim was that version-control metadata is being embedded, such as the git commit hash or maybe a tag, both of which are static within the constraints that builds are generally reproducible across. However, if it includes timestamps or other generated metadata, then I wholeheartedly agree with your concerns.
There is also a great post on Tailscale's blog (see: https://tailscale.com/blog/netaddr-new-ip-type-for-go/) which dives deep into why a new IP type/library was needed in Go.