I agree on the cause of waste, but I'm skeptical AI will move the needle on a solution. The longer I work in big companies, the more I realize that things happen because someone has the strength of will, combined with the right levers and incentives, to make them happen despite organizational inertia. AI is potentially a powerful tool when judiciously used by those who understand what they're doing, but it can also be an amplifier of noise that filibusters real progress when hands-on leadership and good judgement are in short supply.
A lot of what I'm seeing feels like LLMs are "perfect" bridge-to-nowhere enablers. They are the ultimate Yes Men, and they give upper management the impression that nowhere not only needs plenty more bridges, but that those bridges are cheap to build, easy to use, and may already have been built with magic strong enough to attract leprechauns and their pots of gold to the ends of the bridges.
Too much of my career has been in the "Legacy Code" mines, and the LLMs seem like the worst pipeline for expanding "Legacy Code" faster than ever. Some of them may as well be called "Legacy Code-as-a-Service". I'm not worried that my "Legacy Code" skills are going to get out of date; I'm worried they are going to be in higher, uglier demand when some of these bridge-to-nowhere factories finally get shut down, or at least slowed down enough for the real maintenance costs to finally show up on companies' bottom lines.
I wonder how much this is a factor in the widespread mental health malaise that is often attributed to tech these days? There are certainly plenty of factors to go around, but consider the connotation of "scrolling" and how commonly it is the default replacement for boredom in modern life, and suddenly it seems quite insidious.
Super insightful. I feel the same way. I can't mentally "conclude" my read for the day, because there is always just One More Article that is just under the threshold.
An extension of Fear of Missing Out, basically. And yes, I think it causes mental exhaustion and might be directly related to some mental disorders that we have really yet to understand.
Huh? Blocking senders as you surf the web based on what you want to see is a completely different problem from blocking requests to your server based on what the intent of the requester is. I can think of no way these problems are similar except in the very narrow technical sense of maintaining a blocklist and attaching it to a request cycle, which is really not the hard part of either of these problems.
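To be concrete about why that shared piece is trivial, here's a minimal sketch (plain Python, with hypothetical names like handle_request and BLOCKLIST) of "maintaining a blocklist and attaching it to a request cycle": it amounts to a single set lookup per request, and all of the genuinely hard work, deciding what belongs in the set, happens somewhere else entirely.

    # Hypothetical sketch: the "easy" shared piece of both problems.
    # Deciding WHICH identifiers belong in the blocklist (the hard part)
    # is left entirely to whoever populates this set.
    BLOCKLIST = {"203.0.113.7", "bad-actor.example"}

    def handle_request(requester_id, handler):
        # One set lookup per request cycle; nothing clever here.
        if requester_id in BLOCKLIST:
            return (403, "Blocked")
        return handler(requester_id)

    # Blocked identifiers get a 403, everyone else reaches the handler.
    print(handle_request("203.0.113.7", lambda r: (200, "OK")))   # (403, 'Blocked')
    print(handle_request("198.51.100.5", lambda r: (200, "OK")))  # (200, 'OK')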
It’s odd to me that someone who reached E9 at Meta seems so unaware of the capriciousness and political aspect of promotions at high levels. These are coveted and extremely rare roles that many talented and ambitious people are earnestly working towards and most will never achieve just by the numbers. I see nothing wrong with ambition but to measure your career by level is a reductive perspective that can undermine the specific accomplishments and relationships you have built.
This obsession with levels is something I see with many junior engineers who have gone through school chasing shibboleths of success. Stanford, MIT, always chasing the well-defined carrot, but often failing to understand there’s a pretty low ceiling to success on the well-trodden path. Real value comes from solving novel and ambiguous problems without anyone telling you how to do it. You have to realize those levels are meant to capture something about how the most effective technical leaders operate; they’re not a roadmap or a checklist for you to cargo-cult. The things that matter are the quality of the work you do and the perception thereof by those in power. “Levels” are just a secondary HR structure to manage the masses of employees in large corporations, and if you think too much about them you’re taking your eyes off the ball.
> We've all experienced that stare when talking to someone who does not have sufficient depth of understanding in a topic.
I think you're really putting your finger on something here. LLMs have blown us away because they can interact with language in a very similar way to humans, and in fact it approximates how humans operate in many contexts when they lack a depth of understanding. Computers never could do this before, so it's impressive and novel. But despite how impressive it is, humans who were operating this way were never actually generating significant value. We may have pretended they were for social reasons, and there may even have been some real value associated with the human camaraderie and connections they were a part of, but certainly it is not of value when automated.
Prior to LLMs, just being able to read and write code at a pretty basic level was deemed an employable skill, but because it was not a natural skill for lots of humans, it was also a market for lemons, and basic coding was overvalued by those who did not actually understand it. But of course the real value of coding has always been to create systems that serve human outcomes, and the outcomes that are desired are always driven by human concerns that are probably inscrutable to anything without the same wetware as us. Hell, it's hard enough for humans to understand each other half the time, but even when we don't fully understand each other, the information conveyed through non-verbal cues, and the familiarity with personalities and connotations that we only learn through extended interaction, provide a robust baseline which text alone can never capture.
When I think about strategic technology decisions I've been involved with in large tech companies, things are often shaped by high-level choices that come from 5 or 6 different teams, each of which cannot be effectively distilled without deep domain expertise, and which ultimately can only be translated into a working system by expert engineers and analysts who are able to communicate in an extremely high-bandwidth fashion, relying on mutual trust and applying a robust theory of mind every step along the way. Such collaborators can not only understand distilled expert statements about domains where they don't have direct detailed knowledge, but can also make such distilled expert statements themselves and confirm sufficient understanding from a cross-domain peer.
I still think there's a ton of utility to be squeezed out of LLMs as we learn how to harness and feed them context most effectively, and they are likely to revolutionize the way programming is done day-to-day, but I don't believe we are anywhere near AGI or anything else that will replace the value of what a solid senior engineer brings to the table.
I don't like the term "AGI". I think intelligence and understanding are very different things, and both are required to build a useful tool that we can trust.
To use an image that might be familiar to lots of people reading this: the Sheldon character in The Big Bang Theory is very intelligent about lots of fields of study and yet lacks tons of understanding about many things, particularly social interaction, the human impact of decisions, etc. Intelligence alone (AGI) isn't the solution we should be after. Nice buzzword, but not the solution we need. It should not be the objective at the top of the hill.
I've always distinguished knowledge, intelligence, and wisdom. Knowledge is knowing a chair is a seat. Intelligence is being able to use a log as a chair. Wisdom is knowing the log chair will be more comfortable if I turn it around and that sometimes it's more comfortable to sit on the ground and use the log as fuel for the fire.
But I'm not going to say I was the first to distinguish those words. That'd be silly. They're 3 different words and we use them differently. We all know Sheldon is smart but he isn't very wise.
As for AGI, I'm not so sure my issue is with the label so much as with the insistence that it is so easy and straightforward to understand. It isn't very wise to think the answer is trivial to a question people have pondered for millennia. That just seems egotistical, especially when you think your answer is so obviously correct that you needn't bother checking whether you might be wrong. Even though Don Quixote didn't test his armor a second time, he had the foresight to test it once.
In the corporate world, ecosystem and fungibility of programmers are the top priorities. The only way Elixir will get traction there is by Elixir companies literally growing to Fortune 500 size and showing the language/ecosystem is viable at that scale. Even then I doubt the advantages of Elixir will move the needle for that type of company, because once you scale up, the challenge is 99% people, teams, and communication; the elegance and efficiency of the code and ops don't matter much in those types of environments.
He didn't say inertia, he said mature. It takes a while for a language/platform to develop a solid ecosystem and stabilize. That absolutely has value, and is something you can't get out of a new system no matter what novel problems it solves.
As far as new apps go, yeah, I think it's still pretty optimal for a huge swath of web apps, especially for early incubation when you have <20 engineers and you need to move quickly. Not if you need WebSockets or are building other concurrency/performance-critical applications, though.
Yes, I'd also think it's a great idea to build new apps in a language currently undergoing a distribution & supply chain war between the interested parties.
Sure, but there's no single person who believes engineers must be all those things; you're conflating many opinions to form an impossible litmus test. In reality, as the GP pointed out, great engineers don't all fit the same mold, and frankly neither do all jobs and hiring-manager expectations.
I agree with the quote as stated, but I would refactor it for a more powerful insight.
I do believe order of magnitude improvements in productivity and reliability are possible, but they don't come from technology or management technique, they come from simplicity. The simplest possible thing that gets the job done can be infinitely more reliable than whatever baroque contraption comes out of typical fog-of-war enterprise environments. The trick is having the judgement to understand what complexity is essential and how to distill things down to the most valuable essence. This is something AI will never be able to do, because the definition of value is in the eye of the human beholder.