Three Minor Features in Go 1.18 (carlmjohnson.net)
224 points by throwaway894345 on Dec 22, 2021 | hide | past | favorite | 90 comments



Go 1.18 also includes the new net/netip package, which is huge: the new Addr type uses less memory, and it is comparable and immutable.

There is also a great post on Tailscale's blog (https://tailscale.com/blog/netaddr-new-ip-type-for-go/) that dives deep into why Go needed a new IP type/library.


I'm far more excited by Addr / netip than generics :)


I haven't done any Go in my life, but this bit about immutability interests me. Is there a general movement in the Go community towards immutable data structures?


I don't think there is a movement towards immutable data structures. The const keyword only works on basic types (booleans, numbers, strings). Objects can have private member variables, but those are only private at the package level. You can also pass by copy instead of passing a pointer.
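A quick sketch of those two points (`Point` and `movePoint` are made-up names for illustration): const is limited to basic values, and passing a struct by value gives the callee a copy.

```go
package main

import "fmt"

// const works for basic, compile-time values...
const Pi = 3.14159

type Point struct{ X, Y int }

// ...but not for composite types:
// const Origin = Point{0, 0} // does not compile: invalid constant type

// movePoint receives a copy of the struct; the caller's value is untouched.
func movePoint(p Point) {
	p.X = 100
}

func main() {
	p := Point{1, 2}
	movePoint(p)
	fmt.Println(p.X) // still 1: the struct was copied
}
```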


This is something I'm hoping will change with the introduction of generics. Right now, custom data structures like immutable maps or lists are very cumbersome, requiring either code generation or runtime type coercion via `interface{}`.

Go espouses "share memory by communicating, don't communicate by sharing memory", i.e. don't let goroutines communicate by mutating shared data, but then doesn't provide any effective immutable data structures to make this easy.

Being able to safely send pointers to immutable maps over channels would make Go very nice to work with. Although I'll never use Go outside of work until they remove nullable pointers, which seems unlikely.


Channels. Share information between routines with channels. If you're passing pointers to mutate down a channel, then that's on you. Just set yourself on fire, much easier that way lol.

You don't need immutable structures to communicate between goroutines. Value types are fine. Just think a bit about how to use channels as signal carriers; it eventually starts to make sense.
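A minimal sketch of that style, with hypothetical `worker`/`result` names: each goroutine sends a value (a copy) over the channel instead of mutating shared state.

```go
package main

import "fmt"

type result struct {
	id  int
	sum int
}

// worker computes a sum and sends a result *value* over the channel;
// the goroutines never share mutable state.
func worker(id int, nums []int, out chan<- result) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- result{id: id, sum: total}
}

func main() {
	out := make(chan result)
	go worker(1, []int{1, 2, 3}, out)
	r := <-out
	fmt.Println(r.id, r.sum) // 1 6
}
```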


I think I miscommunicated what I meant; there was an implicit assumption that copying value types over channels is bad due to GC overhead. Efficient immutable data structures let you have your cake and eat it too: you can avoid GC overhead by sharing structure while avoiding problems with mutation between goroutines.


Also see past HN thread about it:

https://news.ycombinator.com/item?id=26416553


Is this Tailscale’s library imported into Go’s stdlib?


Yes, after some discussion and minor modifications (and removal of a few features): https://github.com/golang/go/discussions/47323 ... I understand the motivation for this new package, but it does seem like a large API surface to have imported into the stdlib.


Given that this is coming from a (former) core maintainer of the stdlib, I think it's worthy.


Who it came from is irrelevant


No it's not. Someone who is a former core maintainer likely has better perspective on what belongs or doesn't than 99% of the community. There's a reason most stdlib and language decisions in Go are run through an extremely small cabal of people.


I think what OP is trying to say is that the merit of a given feature alone should determine whether it goes into the stdlib, and the author's reputation should not result in something being merged that otherwise wouldn't be.


Of course it should have merit, and it does. I invoked the author's reputation in that he previously both contributed and determined merit -- his opinion carries far more weight than any random gopher's.


Well, your sentence contains the word should two times. To me, this indicates a wish that differs from reality.


Isn't that one of Go's main value propositions? A batteries-and-kitchen-sink-included stdlib?


Kitchen sink seems too far. Batteries is part of it.


Yep


Hadn't seen this but yeah, a much needed improvement!


where does one find a comprehensive list of new things for each version? the release notes never seem to know everything that people in here know.


I normally look at the go blog. Their post about 1.18 linked the detailed release notes.

https://tip.golang.org/doc/go1.18


I like the addition of the GOAMD64 environment variable. It lets you target the capabilities of your CPU.

v2 - Nehalem/Jaguar

v3 - Haswell/Excavator

v4 - AVX-512
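As a usage sketch (assuming a Go 1.18+ toolchain; a binary built for a higher level won't run on older CPUs):

```shell
# Default (v1): baseline x86-64, runs on any 64-bit x86 CPU.
go build ./...

# Target the x86-64-v3 microarchitecture level (AVX2, BMI, FMA, ...);
# faster code, but the binary requires a roughly Haswell-era or newer CPU.
GOAMD64=v3 go build ./...
```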


Agree, this is a good addition.


The release notes usually contain everything.


The investigation to justify the introduction of the Cut function was an interesting read. Really solid numbers on the data, showing that the function, while redundant with other provided functionality, is an extremely common operation and worth special-casing.


I enjoyed reading that too. It's great to see the amount of thought that the language maintainers put into design, especially for things like syntactic sugar (which can get overwhelming in other languages).


I like Go, but I'm finding array operations leave a lot to be desired. It feels weird to use global functions (like append) to perform operations on arrays, and I miss having easy map, reduce, and filter functions that are accessible from the array itself. I guess things are getting better, but idk.


Agreed. I'm doing Advent of Code in Go, and it is so verbose to do simple things like filtering. I agree that for loops can be easier to read, but when it takes 4 LOC to express what a 1-LOC filter function can, and you have 3+ maps or filters chained, it seems far less readable because it is so noisy. Sure, you can understand it, but it is a mess and takes too long to parse for the importance it actually has.


This shortcoming of Go really shows when writing non-trivial programs.


This is really annoying; every time I code in Go I miss Python's list.append method.


That is weird because append is one of the five associated helper functions on slices that's actually in the Go specification.

https://go.dev/ref/spec#Appending_and_copying_slices
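A short illustration of the built-in `append` idiom; note the result must be reassigned, because the backing array may be reallocated when the slice grows.

```go
package main

import "fmt"

func main() {
	// append is a built-in function, not a method: it returns the grown
	// slice, which must be assigned back.
	s := []int{1, 2}
	s = append(s, 3)
	fmt.Println(s) // [1 2 3]

	// Appending one slice to another uses the ... spread.
	s = append(s, []int{4, 5}...)
	fmt.Println(s) // [1 2 3 4 5]
}
```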


Yes, it's a helper function, not a list method like it is in Python. The result is the same, it's just weird.


Oh, I understand what you mean now. I thought you meant the functionality, not the idiom of calling it as a method. I dare say that it didn't occur to me that this difference would be great enough for someone to complain about it on the internet. :P


Python also puts what should be methods as global functions, e.g. len, filter, reduce.


> I miss having easy map, reduce and filter functions that are accessible from within the array itself

Those are the sort of things that generics will enable.
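For example, with Go 1.18 type parameters you can write such helpers once. `Filter` and `Map` below are hypothetical, not stdlib functions; and since Go methods can't introduce new type parameters, they remain plain functions rather than slice methods.

```go
package main

import "fmt"

// Filter returns the elements of s for which keep reports true.
func Filter[T any](s []T, keep func(T) bool) []T {
	var out []T
	for _, v := range s {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

// Map applies f to each element of s, producing a new slice.
func Map[T, U any](s []T, f func(T) U) []U {
	out := make([]U, 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}
	return out
}

func main() {
	evens := Filter([]int{1, 2, 3, 4}, func(n int) bool { return n%2 == 0 })
	doubled := Map(evens, func(n int) int { return n * 2 })
	fmt.Println(doubled) // [4 8]
}
```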


Whenever I read "modern" JavaScript code, I'm irritated by the proliferation of map/filter/reduce.

It makes code harder to read and modify.

A straight for loop is infinitely better in every way.


I find map/filter/reduce much easier to read if it is being used for a functional map, filter, or reduce operation or something not too distant from it.

> A straight for loop is infinitely better in every way.

A straight for, for-of, or for-in loop, as appropriate, is finitely better for imperative operations that don't naturally fit map/filter/reduce but need to be performed over some set of values.


This. Saw a reduce once which contained a lambda one screen long. It gets worse, it contained another reduce...

Ah, and a map that performed side effects.


A one line map or filter is ok.

Other than that it's unreadable.


This debate is basically just "I learned C first" vs "I learned a higher level language first."


Disagree. I used Python heavily before Go, and Go is much, much easier to read. I've never seen Go code in any codebase I couldn't immediately follow. I can't say the same about Python, and that's supposedly an easier HLL.

Then again, I learned BASIC first technically, so my brain probably isn't quite right :).


Neither Go nor Python are languages that encourage composing functionality out of higher order functions, so I think the point stands.


My learning path was Logo -> Basic -> Pascal/Delphi -> PHP -> Python -> Ruby and then Go, with some JS here and there.

I don't really think it has much to do with your language path, but rather with the field you work in most of the time.

People who are building Rails startups for sale will have a different opinion from people who build some sort of integration systems.


I learned C first but then I rode the functional bandwagon (and dynamic languages) and spent many many years in Python and Javascript. I thought map/filter was much better than imperative for a long time.

I rediscovered imperative programming later and found it to be such a breath of fresh air.


C is high level, many just think it isn't.

DDJ had quite a few articles on how to do Lisp-in-C kind of thing.


You're a bit quick to comment there, because the GP said "highER", i.e. another language at a higher level than C.

So technically their post is correct.


C provides the tooling to implement many of the necessary abstractions as libraries, hence why I pointed to the DDJ articles.

You can even plug a conservative GC for the full Lisp like experience.


Are you actually reading peoples comments before replying? You’re posting these completely irrelevant tangents as if it’s a contradiction to earlier posts (it isn’t) and in a tone that suggests you’re the only person who knows C (you’re not).


Whatever.


It's a bit messy in JS.

In Ruby the map/filter/reduce handling is very smooth, readable and useful!


> It's a bit messy in JS.

Mainly, IME, because JS's map passes extra, infrequently used parameters (the index and the array), and functions you might want to map often accept optional, less-frequently-used parameters; the two can interact horribly wrong when you do:

  arr.map(func)
instead of:

  arr.map(v => func(v))
when you want a “simple” map application.


I think it's important to understand that big parts of the software community over-rely on this style of programming. In my opinion, overuse of higher-order functions is a cancer on programming languages; they should be used at the absolute minimum, or even omitted from a language entirely.

https://github.com/golang/go/issues/45955#issuecomment-83235...


strings.Cut is otherwise known in Rust as str::split_once. It's something I've always missed when writing Go. Nice to see it added to the stdlib


Yes, I use this much more often than the more powerful general split feature, it feels very "right-sized".

split_once() and similar split features in Rust are interesting because Rust doesn't have overloading, yet you can split on a string or a character. This relies on a trait, Pattern, that is nightly-only, so you can't implement it yourself in stable Rust today; but eventually this factors out the commonality, which is cleverer than overloading (because it applies everywhere automatically).


You can implement it yourself. You just have to use your own trait.


I stared at this a few times and I don't get it.

The point is you can't implement Pattern, because it's a nightly feature. If you just make your own, that isn't Pattern and provides none of the benefits.

Yes, you can implement your own trait, and provide blanket implementations for it, but unlike Pattern yours is not part of the standard.

The nice thing about Pattern is that if Rust added say split_exactly_six_times() to the standard library that would take separator: Pattern too and so things which implement Pattern qualify, I can't see a way to get that benefit for my own traits.


agree -- ultra useful alternative to splitting

would be neat to imagine a language where this was an optimizer feature rather than a decision the programmer made. many functions that return an array are ultimately using just one, or a few elements of it. the compiler can make a lot of decisions in the caller about how to reduce allocation or short-circuit array processing.

I think, but am not sure, this is what the experimental parsing language 'wuffs' is about https://github.com/google/wuffs -- the language itself is aware of array lengths as a first-class citizen and can make decisions accordingly


WUFFS is, as its name suggests, primarily about safety, but it gets performance advantages that fall out of safety (e.g. it needs no runtime bounds or overflow/underflow checks because it proves these things can't happen at compile time).


The optimization could be done by lazy evaluation, like a generator.


split_first would have been a better name, though (and split_last instead of rsplit_once).


I’ve been copy-pasting or re-writing a function much like strings.Cut, probably dozens of times, in different Go packages. It’s nice to see this function make it into the standard library!


Copy-pasting and rewriting is the Go Way.


Hey, I'll take it, over the NPM left-pad way of doing things.


You could have written a package, but I guess copy-paste is more in line with Go's culture.


Using a package is the NPM left-pad way of doing things. This function is like two lines long.


So? Keep it for yourself in a utilities package.

I said nothing of using one package per function.


I used to do that. I had a "goutils" package or something like that.

From use, I concluded that the utilities package contained only two things: functionality that was substantial enough to clean up and publish as its own package on GitHub, and minor functionality that was better off being copy/pasted between projects. So I abandoned my utility package and published a few minor packages on GitHub.


For those that have switched to Go from nodejs to serve json for a web app, how did it go? (excluding the possibility of needing something like react ssr which can be a big one for many apps).


> excluding the possibility of needing something like react ssr

No need to exclude that possibility. I once worked on a Go app that rendered React (in Typescript) server-side. The Javascript could call Go API functions directly when rendering server-side and those calls would turn into gRPC-web calls after it was loaded in the client. It worked really well.


> would turn into gRPC-web calls after it was loaded in the client

Isn't the point of SSR to avoid needing to have the client make additional requests for the first render? If I return something like <div dangerouslySetInnerHTML={fetchWithGrpc(myGoFunc)} /> that's not SSR. Perhaps I'm missing something.


> Isn't the point of SSR to avoid needing to have the client make additional requests for the first render?

Yes, hence why it was able to call the Go functions directly during SSR, the results of which were bundled with the payload delivered to the client. gRPC played no role during initial render.

If the React app needed more information/updates as the user used the app then the function calls would transparently happen over gRPC instead.

Consider:

    const MyReactComponent = () => {
        const [things, setThings] = useState()
        useEffect(() => {
            // In-memory call to Go GetThings function during SSR render; gRPC call
            // to Go GetThings function when running in the client.
            GetThings().then(setThings)
        }, [])
        return <div>{things}</div>
    }
Architecturally, not a whole lot different to how you might build a SSR React app on Node. The backend was just written in Go instead and that backend had a built-in Javascript runtime to execute the frontend code for SSR purposes.


I subscribed to GH activity for the author's friendliness-enhancing utility library. It's for parsing build information from the runtime/debug pkg:

https://github.com/carlmjohnson/versioninfo/

The primary motivation is curiosity: to learn whether this becomes "the [best/default] way" folks reach for when leveraging BuildInfo to implement binary versioning in Go.

It could be a nice benefit to the entire Go ecosystem if a widely used, de facto, consistent automatic versioning scheme emerges (for the common cases, e.g. minor point-release lineage).


If the build time is being included in the binary by default, I guess that means builds are not reproducible by default? Is there any easy way to exclude or strip the build time to get a reproducible binary?


They are already not reproducible. But the build time is not included; from what I gather, it's the commit time of the commit the build comes from.


Go builds are, in general, reproducible by default. Build stamps do not change this.


"reproducible by default" if the system is set up a certain way.

Build time injected into builds would absolutely break any reproducibility.


> "reproducible by default" if the system is setup a certain way

What do you mean? Under what circumstances is a Go build not reproducible? From what I've observed, if you build your binary with the same set of inputs, the same binary is produced. Is that sometimes not true?


By default you need the same GOPATH and GOROOT.

Compiler version must be identical (not a huge stretch).

cgo reproducibility is not a guarantee.


It would, which is why the Go compiler and linker do not put the build time in binaries.


But you literally said "build stamps do not change this", and a build-time stamp absolutely would. Of course, as I mentioned, it does not look like Go is adding a build time to the binary.


1. Antipattern. 2. Did it inside, I hope. 3. strings.Cut, where is the doc for this? How can this not be Googled? Anyway, easily solved in many ways.


Is this just drive by negativity?

1. Care to elaborate?

2. What does this even mean?

3. It’s in the pending release notes with all the other new features and further information in the post you’re commenting on. https://tip.golang.org/doc/go1.18

Try to be more constructive if you want a conversation please.


[flagged]


1. Wrong. There are cases where reproducible builds are valuable. There are other times when you don’t care. This allows you to choose for what fits your situation.

2. Okay, what about projects that don’t fit your criteria? This is useful and requested.

You seem to have a different philosophy than other people but seem to be unable to consider those other philosophies. It’s your way or the highway.

Not everyone wants an exciting language. Some people want a stable and useful language. Go is intentionally not exciting. They add features based on user feedback and real world use, at a slow and careful pace.

Also consider you might get downvotes not because it’s about go, but because you’re kinda rude. You just spouted negativity with no explanation and called tons of people Sons of Bitches. Could be that?


I think the guarantee of downvotes is for your absolutely awful attitude


For your criticism regarding reproducible builds, I'm not sure it's entirely correct. The impression I got from a skim was that version control metadata was being embedded, such as the git commit hash or maybe a tag. Both of which are static within the constraints that builds are generally reproducible across. However, if it includes timestamps or some generated metadata, then I wholeheartedly agree with your concerns.


Ok, if Go is soulless, what languages have a soul? Just curious.


> strings.Cut, where is the doc for this? How can this not be Googled?

https://pkg.go.dev/strings@master#Cut


1. Antipattern

Any chance of a reason?


Why is it an anti pattern?



