I see this advice a lot in various forms. I think people are probably too conflict averse on average, so there is some merit to it, but there are limits. I feel like there have been a lot of times in my life where just moving on or being diplomatic was the right call.
The manager example is a good case study. There are a lot of examples here where there might be genuine repercussions for raising an issue with a manager. I wouldn't give this as blanket advice.
Unfortunately, I don't think there's a simple rule about whether or not you should raise an issue and it needs to be decided case by case.
This is such an interesting comment thread because people have such wildly different opinions and from my perspective the entire disagreement just comes from company size.
I am a "CTO" and I always put that in air quotes because I have one direct report and I spend the lion's share of my time doing IC work. I know what I do is not what people picture when they hear the title and I feel weird saying it. I use it because I do have to make the strategic technical decisions, there is no one else. When people are marketing technical B2B SaaS I am the one they are looking for.
From my perspective there just isn't nearly enough for me to do as a CTO to justify me not coding. If I were to hire someone just to manage them that would be an unjustifiable expense at this point. But I also get that as soon as we get to a reasonable size this would be totally unsustainable.
This sounds like myself as well. We are a small dev team of 6 (in a company of 30), though I also have a partial ownership stake in the company. Even though I spend a significant part of my time on "CTO" style work (client meetings, market assessments, product overviews, roadmap planning, third party collaboration, etc.), there isn't nearly enough of it to fill my time or justify my salary. I code and review like my team does, but I also oversee technical direction for our whole portfolio, and the responsibility for that technical success or failure rests on me.

As we grow, the coding will decrease, I'm sure, but I see a lot of people here criticizing from the perspective of larger companies where being a CTO would be a full-time responsibility. In our situation the title (as much as I often dislike it) represents my level of responsibility, if not directly the full scope of my role.
I am pretty skeptical of how useful "memory" is for these models. I often need to start over with fresh context to get LLMs out of a rut. Depending on what I am working on, I often find ChatGPT's memory system makes answers worse, because it sometimes assumes tasks are related when they aren't. I have not really gotten much value out of it.
I am even more skeptical on a conceptual level. The LLM memories aren't constructing a self-consistent, up-to-date model of facts. They seem to remember snippets from your chats, but even a perfect AI may not be able to get enough context from your chats to make useful memories. Things you talk about may be unrelated, or they go stale, and you might not know which memories your answers are drawing on. If you did have to manage that manually, it would kind of defeat the purpose of memories in the first place.
> When just a few years ago, having AI do these things was complete science fiction!
This is only because these projects became consumer facing fairly recently. There was a lot of incremental progress in the academic language model space leading up to this. It wasn't as sudden as this makes it sound.
The deeper issue is that this future-looking analysis goes no further than drawing a line connecting a few points. COVID is a really interesting comparison, because in epidemiology the exponential model comes from our understanding of disease transmission. It is also not actually exponential: as the population becomes saturated, the transmission rate slows (it is worth noting that unbounded exponential growth doesn't really seem to exist in nature). Drawing an exponential line like this doesn't really add anything interesting. When you do a regression, you need to pick the model that best represents your system.
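To make the contrast concrete, here is the standard textbook comparison (not from the article, just the usual population-growth equations):

\[ \frac{dN}{dt} = rN \;\Rightarrow\; N(t) = N_0 e^{rt} \quad \text{(unbounded exponential)} \]
\[ \frac{dN}{dt} = rN\Big(1 - \frac{N}{K}\Big) \;\Rightarrow\; N(t) = \frac{K}{1 + (K/N_0 - 1)\,e^{-rt}} \quad \text{(logistic, saturates at the carrying capacity } K\text{)} \]

Both curves look nearly identical while N is small compared to K, which is exactly why fitting a line to a few early points tells you almost nothing about which regime you are actually in.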
This is made even worse by the reliance on benchmarks, because coming up with good benchmarks is itself an important part of the AI problem. AI is really good at improving things we can measure, so it makes total sense that it will eventually crush any benchmark we throw at it, but there will always be some difference between benchmarks and reality. I would argue that the more subtle the thing you are trying to benchmark, the harder it becomes to make a good benchmark for it. This is just a conjecture on my part, but if something like that is possible, you need to rule it out when modeling AI progress.
There are also economic incentives to keep announcing percentage gains in progress on a regular schedule.
Will AI ever get this advanced? Maybe, maybe even as fast as the author says, but this just isn't a compelling case for it.
Any physical process can be interpreted as computation. Computation is in the eye of the beholder. Interpreting life as computation doesn't really add anything new; we are just describing a model that we came up with.
In general, I think the dependency hate is overblown. People hear about problems with dependencies because dependencies are usually open source code used by a lot of people, so the problems are public and relevant. You don't hear as much about problems in the in-house code of one particular company unless it ends up in a high profile leak. For example, something like the Heartbleed bug was a huge deal and got a lot of press, but imagine how many issues we would be in if everyone was implementing their own SSL. Programmers often don't follow best practices when they do things on their own. That is how you end up with things like SQL injection attacks in 2025.
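(To make the SQL injection point concrete: the fix has been boring and well known for decades, yet hand-rolled query code still ships without it. A minimal Go sketch using database/sql; the function and table names are just made up for illustration.)

// lookupUser uses a parameterized query: the driver handles escaping, so user
// input can never change the structure of the SQL statement.
// Assumes "database/sql" is imported and a driver that uses ? placeholders.
func lookupUser(db *sql.DB, name string) (*sql.Rows, error) {
	// The classic mistake is string concatenation:
	//   db.Query("SELECT id FROM users WHERE name = '" + name + "'")
	return db.Query("SELECT id FROM users WHERE name = ?", name)
}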
Dependencies do suck, but it is because managing a lot of complicated code sucks. You need some way to find issues over time and keep things up to date. Dependencies and package managers at least offer us a path to deal with problems. If you are managing your own dependencies, which I imagine would mean vendoring, then you aren't going to keep those dependencies up to date, and you aren't going to find out about exploits in them and apply the fixes.
> imagine how many issues we would be in if everyone was implementing their own SSL.
No, the alternative is to imagine how many issues we would be in if every project pulled in 5 different SSL libraries. Having one that everybody uses and that is already installed on everyone's system is how you avoid dependency hell. Even better if it's in the stdlib.
I am extremely skeptical of this whole "mathematical model to predict history" thing. There's just not enough history to do it, and you bake in your biases when you go through the qualitative historical record and try to reduce it to quantities. A lot of people analyze history and claim they have figured it out, they have come to different conclusions, and none of them have made reliable, specific predictions. If you say something bad will happen at some point in the future you'll probably be right, but that's not enough to call it science.
Nevermind the lack of data - what even would be the limits of knowledge in such a model? If it was widely believed that society will collapse at some point in the next 30 years, how would human behavior change in response? How would that affect the original prediction?
- It's a probabilistic model, so it only predicts the odds of a collapse.
- Their main contribution was the creation and curation of a super detailed historical database, Seshat. It spans almost 10,000 years of human history, covering more than 400 polities from 30 regions around the world, with over 1,500 variables. Based on this data, Turchin et al. devised the mathematical model behind the prediction.
- One key part of the work is finding surrogate data when direct measurements are not available. For example, body size can be used as a proxy for the nutrition and economic situation of a population.
- In 2010, Nature asked experts and super-forecasters for their predictions for 2020. Only Turchin predicted the coming collapse of America.
Elite overproduction is an interesting topic, and putting aside any suggestion that it's a precise mathematical predictor, it obviously creates societal problems.
That is, you've created a large class of intelligent achievers with nothing for them to do. Arguably that just naturally produces increasing societal upheaval. Whether that means revolution or just chaotic, increasingly populist elections is a matter of degree.
There is always something for a large class of intelligent achievers to do. The failure to put them to work is more of a societal failure than it is an indictment of the education system. (Maybe AI will change this, but only in the same way that it changes every part of our societal model.)
> There is always something for a large class of intelligent achievers to do. The failure to put them to work is more of a societal failure than it is an indictment of the education system.
This doesn’t quite resonate with me, because I’ve lived through it and seen it happen over and over again even in the most functional of societies.
Oversimplifying a bit, let’s call intelligent achievers elites. There is often a mismatch between elite supply and elite slots, and by definition elite slots are scarce — no matter how well your society is functioning.
Elite slots scale with the maturity and breadth of the economy. The U.S., with its size and diversity, has a much larger pool of elite slots than most countries. That’s one reason I moved here.
By contrast, in Canada (a country I love deeply), most Ph.D.s end up underemployed or they leave, because their skills simply aren’t needed at the level of specialization they were trained for. Some jobs only make sense when you have enough scale to support them — and without that scale, those elite positions just don’t exist.
Can intelligent achievers pivot to something else, like entrepreneurship? Sure, but in a smaller economy, the options are much more limited, even if they do a startup and invent new categories. They can also accept underemployment. There are inherent constraints in an economy due to natural factors like scale, geography, etc.
(My understanding is that Taiwan is in this situation -- highly educated people, limited industries that can employ them. Some move abroad, but many just curb their ambitions and try to get by with low pay and accept their lot in life, striving only for "little joys" they can afford like bubble tea and inexpensive street food)
Can you name some examples? Virtually every major revolution or civil war I can think of would involve intelligent achievers who've already made it. In fact, the core of the rebellion would be a class that's often vital for the exercise of political power but won't be allowed access to that same power.
English gentry, New England merchants, nobles of the robe, army officers, etc.
Only the Russian Revolution would involve people who were nobodies before it, and even then they only took charge after the disaffected elites that came to power in February spent most of 1917 undermining each other.
Even the Russian Revolution was led by elites:
- Kerensky was a lawyer
- Lvov was an aristocrat
- Lenin and Trotsky were highly educated and known for their intellectual brilliance
The core of the Russian Revolution was highly educated nerds who would cancel their friends over slight differences in their understanding of obscure socioeconomic theories.
I find the way people talk about Go super weird. If you have criticisms, people almost always respond that the language is just "fine" and kind of shame you for wanting more. People say Go is simpler, but having to write a for loop to get the list of keys of a map is not simpler.
I agree with your point, but you'll have to update your example of something Go can't do.
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" package, so you can do:
keys := slices.Collect(maps.Keys(someMap))
With the wonder of generics, it's finally possible to implement that.
Now if only Go were consistent about methods vs functions, maybe then we could have "keys := someMap.Keys()" instead of it being a weird mix like `http.Request.Header.Set("key", "value")` but `map["key"] = "value"`.
I haven't used Go since 2024, but I was going to say something similar--it seems like I was pretty happy doing all my functional-style coding in Go. The problem for me was the client didn't want us to use it. We were given the choice between Java (ugh) and Python to build APIs. We chose Python because I cross my arms and bite my lip and refuse to write any more Java in these days of containers as the portability layer. I never really liked Java, or maybe I never really liked the kinds of jobs you get using Java? <-- that
Fair, I stopped using Go pre-generics, so I am pretty out of date. I just remember having this conversation about generics, and at the time there was a large anti-generics group. Is it a lot better with generics? I was worried that a lot of the library code was already written pre-generics.
The generics are a weak mimicry of what generics could be, almost as if to say "there we did it" without actually making the language that much more expressive.
For example, you're not allowed to write the following:
type Option[T any] struct { t *T }
func (o *Option[T]) Map[U any](f func(T) U) *Option[U] { ... }
That fails because methods can't have type parameters, only structs and functions. It hurts the ergonomics of generics quite a bit.
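The usual workaround, as far as I know, is to hoist it into a package-level generic function, which compiles but loses the method-call ergonomics (MapOption and the nil handling here are just my sketch, not from any library):

// Methods can't introduce new type parameters, so the mapping has to live
// outside the type instead of being o.Map(f).
func MapOption[T, U any](o *Option[T], f func(T) U) *Option[U] {
	if o == nil || o.t == nil {
		return &Option[U]{}
	}
	u := f(*o.t)
	return &Option[U]{t: &u}
}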
And, as you rightly point out, the stdlib is largely pre-generics, so now there's a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Value" and "atomic.Pointer", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also typically aren't deprecated, so they're just there to trap people who don't know "no, never use atomic.Value, always use atomic.Pointer".
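For anyone who hasn't hit that particular trap: the old API only checks types at runtime, while the generic one checks them at compile time. A rough sketch (Config is a made-up type, and both assume "sync/atomic" is imported):

// atomic.Value predates generics: it stores an interface{} value, Load needs a
// type assertion, and storing a different concrete type later panics at runtime.
var v atomic.Value
v.Store(&Config{})
cfg := v.Load().(*Config)

// atomic.Pointer[T] (Go 1.19+) is typed, so the same mistakes fail to compile.
var p atomic.Pointer[Config]
p.Store(&Config{})
cfg2 := p.Load() // already a *Config, no assertion needed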
> Now if only Go was consistent about methods vs functions
This also hurts discoverability. `slices`, `maps`, `iter`, `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just `items.sort().map(foo)`, guided and discoverable by auto-completion.
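Concretely, in current Go you compose package functions you already have to know about, rather than chaining methods that autocomplete would surface. For example, getting a map's keys in sorted order (assuming Go 1.23's "maps" and "slices" packages and an ordered key type):

keys := slices.Sorted(maps.Keys(m))
// vs. the hypothetical method-chaining style described above, which isn't valid Go:
// keys := m.Keys().Sorted()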
Ooh! Or remember when a bunch of people acted like they had ascended to heaven for looking down on syntax highlighting because Rob said something about it being a distraction? Or the swarms blasting me for insisting GOPATH was a nightmare that could only be born of Google's hubris (literally at the same time that `godep` was a thing and Kubernetes was spending significant effort just fucking dealing with GOPATH).
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
This just makes it even more frustrating to me. Everything good about Go is really about the tooling and ecosystem; the language itself is not very good. I wish this effort had been put into a better language.
> I wish this effort had been put into a better language.
But that effort is being put in. Read newsletters like "The Go Blog" and "Go Weekly"; it's been improving constantly. Language changes take a lot of time to get right, but the language is evolving.
I don't agree that it is because of the "quality" of the video. The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer. It is interesting because it has a consistent perspective. It is possible AI art could one day be indistinguishable, but for people to care about it, I feel they would need to lie and say it was made by a particular person, or create some sort of persona for the AI. But there are a lot of people who want to do the work of making art. People are not the limiting factor; in fact, we have way more people who want to make art than there is a market for. What I think is more likely is that AI becomes a tool in the same way CGI is a tool.
> The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer.
The trouble with AI shit is it's all contaminated by association.
I was looking on YT earlier for info on security cameras. It's easy to spot the AI crap: under 5 minutes long, with just stock video or photos in the preview.
What value could there be in me wasting time to see if the creators bothered to add quality content if they can't be bothered to show themselves in front of the lens?
What an individual brings is a unique brand. I'm watching for their opinion, which carries weight based on social signals, their catalogue, etc.
Generic AI will always lack that, until it can convincingly be bundled into a persona... and then the cycle will just repeat: we'll search for other ways to separate the lazy, generic content from the meaningful, original stuff.