I'm a Go newbie, so I had a question about the following.
> Go's lack of support for immutable data — the only way to prevent something from being mutated is to only hand out copies of it, and to be very careful to not mutate it in the code that actually has access to the inner bits.
I thought everything was handed out as copies in Go by default unless a pointer was being passed. So this would make it “easy” to tell whether you are mutating an object or not.
Would appreciate it if anyone can clear that up for me.
side notes:
Since I'm still a junior, I'm not too interested in language wars, and I feel like I learn something from everything I touch, no matter how old/new it is.
Honestly, I would never have the confidence to critique this <language> in that fashion. I can't convince myself that I know enough about language design/systems to critique anything that harshly.
They mention it elsewhere in the post, but there are a few types that are essentially pointers but don't look like pointers. Slices, maps, interfaces, and channels (and also functions, but that isn't that important here). So you cannot just think that anything that doesn't have an asterisk is a copy.
Besides that, there is also the fact that slices share underlying array storage, and `append` doesn't make a copy as long as there is still capacity, which means that you should be careful when working with subslices that get appended to.
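A minimal sketch of that subslice/append footgun (the names `base` and `sub` are just for illustration):

    package main

    import "fmt"

    func main() {
        base := []int{1, 2, 3, 4}
        sub := base[:2] // shares base's backing array: len 2, cap 4

        // There is spare capacity, so append writes into base's storage
        // instead of allocating a new array.
        sub = append(sub, 99)

        fmt.Println(base) // [1 2 99 4]: base[2] was silently overwritten
        fmt.Println(sub)  // [1 2 99]
    }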
I see! Thank you for the example, that was helpful.
I read through the example regarding the struct containing the map. I recently ran into that when working on something.
slightly meta:
So how can one evaluate whether that is a good decision or a bad one?
Does forcing the programmer to use `make` explicitly for maps help prevent errors, or is it tedious, and would it be better for the zero value of a map to not cause errors?
I actually agree with the author that nils are a massive pain in the arse most of the time, and that Go would be much better off without them.
There is no precise, mathematical answer to your question, since it's always contingent on many things, but one way to evaluate a “correctness” of a design choice is the old adage that a good interface is hard to misuse. If your code can handle a nil map okay, and not requiring a non-nil map brings significant ergonomics advantages, you may not need to check it. Otherwise, I personally would err on the side of caution and design an API that either doesn't give the user a choice (i.e. creates the map itself) or returns an error if they provide invalid data.
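To make the trade-off concrete, here's a small sketch of how the zero-value (nil) map behaves: reads are fine, writes panic until you `make` it:

    package main

    import "fmt"

    func main() {
        var m map[string]int // zero value is nil

        fmt.Println(m["missing"]) // reading a nil map is fine: prints 0
        fmt.Println(len(m))       // also fine: 0

        // m["key"] = 1 // would panic: assignment to entry in nil map

        m = make(map[string]int) // the map must be made before writing
        m["key"] = 1
        fmt.Println(m["key"]) // 1
    }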
> I thought everything was handed out as copies in Go by default unless a pointer was being passed. So this would make it “easy” to tell whether you are mutating an object or not.
The first bit is correct.
The point the article makes is that, to know whether a variable is potentially mutated, you have to go look at the signature of any function called on it. E.g. you can't just look at `main` and "know" whether `a` will get mutated or not; you also have to look at the signature of `Change`. In C, with the following code:
    #include <stdio.h>

    struct test {
        int value;
    };

    /* Change receives a copy; mutating it cannot affect the caller. */
    void Change(struct test t) {
        t.value = 2;
    }

    int main() {
        struct test a = { .value = 1 };
        Change(a);
        printf("a.value = %d\n", a.value); /* prints 1 */
        return 0;
    }
I can be 100% sure that `a` is never mutated, because it is passed to `Change` by value and won't get automatically turned into a reference. Had `Change` been called with `&a`, then I'd know that `a` potentially gets mutated.
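For contrast, here's a minimal Go sketch of the situation the article describes (the pointer receiver on `Change` is my assumption; the point is that it's invisible at the call site):

    package main

    import "fmt"

    type T struct{ value int }

    // Pointer receiver: Go automatically takes &a at the call site,
    // so this method mutates the caller's struct.
    func (t *T) Change() {
        t.value = 2
    }

    func main() {
        a := T{value: 1}
        a.Change()           // nothing here says a pointer is involved
        fmt.Println(a.value) // 2
    }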
In Rust, `a` would have to be declared mutable to start with, e.g.
    #[derive(Debug)]
    struct A { value: i32 }
    impl A {
        fn change(&mut self) { self.value = 2; }
    }
    fn main() {
        let mut a = A { value: 1 };
        a.change();
        println!("{:?}", a);
    }
In the above, I know that `change` can potentially mutate `a`. And if `a` had been declared with a plain `let a`, I could be 100% sure that `change` cannot mutate it.
The direction Go took is in line with many other languages (I think C++ behaves this way too: it can automatically turn values into references based on the function signature).
I feel like that C example isn't fair, because you can't have functions bound to types, right?
You're not calling `change(a)` but instead calling `a.change()`.
I haven't learned any Rust yet; I'm waiting till I'm a little better at Go so I can be comfortable at work before doing so :)
What's the advantage of creating a function on struct A rather than creating the C equivalent, `change(obj structAtype)`? To me, right now, it seems that it should be expected that an `a.Change` func will mutate A.
I see what you're saying: with Go it can't always be easy to check whether what you're calling mutates its argument or not. It gives the developer more options to pass things by ref/val, which gives more power but more room for mistakes.
> it seems that it should be expected that an a.Change func will mutate A
But you can make methods that have copied (value) receivers:
    package main

    import "fmt"

    type Foo struct {
        bar int
    }

    // mutate has a value receiver, so it works on a copy of f.
    func (f Foo) mutate() {
        f.bar = 2
    }

    func main() {
        f := Foo{bar: 1}
        f.mutate()
        fmt.Println(f.bar) // 1
    }
> What's the advantage of creating a function on struct A rather than creating the C equivalent, `change(obj structAtype)`?
Types need to have methods in order to fulfill interfaces, that's all. (Well, and all the downstream benefits of using interfaces, such as polymorphism.)
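A short sketch of why methods matter for interfaces (the `Named` interface and `greet` helper are made up for illustration):

    package main

    import "fmt"

    // Named is a hypothetical interface for this example.
    type Named interface {
        Name() string
    }

    type Foo struct{ bar int }

    // Because Foo has this method, it satisfies Named implicitly.
    func (f Foo) Name() string {
        return fmt.Sprintf("Foo(%d)", f.bar)
    }

    // greet works polymorphically with any type that has a Name method.
    func greet(n Named) {
        fmt.Println("hello,", n.Name())
    }

    func main() {
        greet(Foo{bar: 1})
    }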
The problem is that you can still mutate a copy. But does the person mutating the copy realize it's a copy, and that it won't propagate?
It's also hard to keep a mental model of which builtin primitives are values (copy semantics) and which are references (reference semantics).
Arrays are values. Slices are references, but might reference some dynamically allocated storage or an array. Appending to a slice makes a copy _of the slice header_, but the append might modify the underlying array it came from. It's not possible to tell from the type whether it references an array or not.
Maps have reference semantics too. Why is it that maps and slices are the only references that don't have a pointer star to denote that?
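A quick sketch of that split between value and reference semantics:

    package main

    import "fmt"

    func main() {
        arr := [2]int{1, 2} // array: value semantics
        arrCopy := arr
        arrCopy[0] = 99
        fmt.Println(arr[0]) // 1: the copy is independent

        sl := []int{1, 2} // slice: reference semantics
        slAlias := sl
        slAlias[0] = 99
        fmt.Println(sl[0]) // 99: both headers share the backing array

        m := map[string]int{"a": 1} // map: reference semantics too
        mAlias := m
        mAlias["a"] = 99
        fmt.Println(m["a"]) // 99
    }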
Having followed language wars for a while as a relative newcomer to programming (longtime lab rat type), my present conclusion is that languages are just tools, comparable in my experience to different laboratory techniques and instrumentation. Different tools are appropriate for different purposes. Some are a lot less convenient than others to set up and use. Some are so obscure that nobody really supports them anymore, and if you want to use them you'll have little help with troubleshooting. Some are popular fads that come and go, as people realize there are better options available for particular jobs.
Once you learn how to use one set of tooling well, it's not so hard to jump over to another one if you get a job in a lab that relies heavily on it. Same with languages.
Fanatical adherents of one option or another often have some ulterior motive ... like the lab that spent $2 million on that high-field NMR spectroscopy machine and is always trying to get people to use it so they can get some co-author publications. Often the most vitriolic fights are over some technical detail or other that doesn't apply to the vast majority of use cases.
Conclusion: meh. Of course Python and C++ are the superior options, however. Plus C for things like writing low-level code for firmware etc. The fact that these are the only languages I have any experience with is entirely incidental to the truth of this claim.
I definitely see that languages are just tools and you pick the right one for the job.
I guess the confusion for me is that languages often have overlapping use cases with slightly different semantics/syntax.
How do we objectively evaluate which is better? Is it industry adoption? Academic praise? I'm sure there's an answer to that, but I haven't gone down that rabbit hole yet.
Maybe it's my shortcoming as a junior :)