There's no moat given that open source models are almost as good already. And it's yet to be proven that LLMs in particular can be reliable enough to actually provide significant return on investment.
Then there's the legal issue. People are not going to accept "you can't sue me, the AI decided", so an imperfect model is not going to be deployed in any field of consequence. The idea that you can offload liability onto models is not going to fly, and it's ethically reprehensible in my view and in the view of most serious people working in regulated industries. It actually introduces new legal issues: a jury is far less likely to side with an AI than with a person who simply made a mistake. There are numerous social reasons why the liability concerns here are legitimate, and they are genuinely different for companies than if a human had been making the decisions.
Therefore, the models have to be perfect, or they have to be checked by hand by experts at every step (in which case, there is no return on investment, you haven't really automated anything).
That's the classic conundrum of "I may as well have done it myself, given that I had to check the code line by line anyway" that I see in my own work, and the same will apply to imperfect models in any field consequential enough to offer a hope of profitability (and usually, a field being consequential enough to be potentially profitable also implies a potential for liability).
Anybody who works as a programmer in finance/energy/healthcare/government or other parts of the "real economy" could have told you this years ago. The notion that AI could be used to automate significant work there was dead pretty shortly after arrival. The only industry that keeps up the charade is the one with a vested interest in selling, and making dubious promises about, AI products.
That doesn't mean I don't think LLMs are cool or that they are totally useless. For endeavours that were limited in their profitability anyway, they may very well have some use cases. I just don't see a viable business model for any large company except Nvidia, AMD, Broadcom, and others making money selling data center equipment. Maybe ads and agitprop, but that seems to be a parasitic relationship with social media and the internet, and one wonders how long it can continue before cannibalizing itself.
'Around 2002, a team was testing a subset of search limited to products, called Froogle. But one problem was so glaring that the team wasn't comfortable releasing Froogle: when the query "running shoes" was typed in, the top result was a garden gnome sculpture that happened to be wearing sneakers. Every day engineers would try to tweak the algorithm so that it would be able to distinguish between lawn art and footwear, but the gnome kept its top position. One day, seemingly miraculously, the gnome disappeared from the results. At a meeting, no one on the team claimed credit. Then an engineer arrived late, holding an elf with running shoes. He had bought the one-of-a kind product from the vendor, and since it was no longer for sale, it was no longer in the index. "The algorithm was now returning the right results," says a Google engineer. "We didn't cheat, we didn't change anything, and we launched."'
The thing is that AI is completely unpredictable without human-curated results. Stable Diffusion made me relent and admit that AI had finally arrived for real, but I no longer think so. It's more like artificial schizophrenia: it does produce results, often plausible-seeming results, but it's not the real thing.
Telcos used to monitor their copper outside plant for moisture. This was called Automatic Line Insulation Testing in the Bell System. The ALIT system ran in the hours before dawn. It would connect to each idle line, and apply, for tens of milliseconds, about 400 volts limited to very low current between the two wires, and between each wire and ground, measuring the leakage current. This would detect moisture in the cable. This was dealt with by hooking up a tank of dry nitrogen to the cable to dry it out.
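At heart it's just an insulation-resistance measurement: apply a known voltage, read the leakage current, and flag any pair whose computed resistance falls below a threshold. A minimal sketch of that idea, with a purely illustrative threshold (the Bell System's actual pass/fail figures and measurement details were more involved):

```python
# Toy sketch of an ALIT-style leakage check (illustrative only; the real
# Bell System thresholds and measurement procedure differ).
def insulation_ok(test_volts: float, leakage_amps: float,
                  min_megohms: float = 1.0) -> bool:
    """Apply Ohm's law to the measured leakage and flag suspect pairs."""
    if leakage_amps <= 0:
        return True  # no measurable leakage
    resistance_megohms = (test_volts / leakage_amps) / 1e6
    return resistance_megohms >= min_megohms

# 400 V pushing 100 microamps of leakage -> 4 megohms -> probably fine;
# 400 V pushing 4 milliamps -> 0.1 megohm -> likely a wet cable section.
print(insulation_ok(400.0, 100e-6))  # True
print(insulation_ok(400.0, 4e-3))    # False
```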
Here's a 1960s-vintage Automatic Electric line insulation test system at work in a step-by-step central office. [1] Here's the manual for automatic line insulation testing in a 5ESS switch. [2] 5ESS is still the major AT&T switch for copper analog phone lines. After that, it's all packet switching.
For fiber, of course, moisture doesn't affect the signal.
This led to an urban legend: "bell tap". While Western Electric phones were designed to not react to the ALIT test signal, many cheap phones would emit some sound from the "ringer" when the 400V pulses came through, some time before dawn.
I’m on the edge of concluding that the iPhone was actually a bad thing and that the mobile revolution is on balance bad for humanity.
* There is something addictive about this form factor plus the touch interface, especially when combined with the way apps have been designed to maximize the effect. This is by far the worst thing about this platform.
* They are “too” easy to use. Before the iPhone and iOS (and a bit later Android) people were becoming “computer literate.” Remember that phrase? They were learning to actually use information technology. Mobile halted that process.
* It’s a medium for consumption, not production. Input with them is slow and cumbersome and best suited for short sound bites like tweets or TikTok videos.
* They bias media toward short interaction and therefore shallow content. You can't express complex ideas that way.
* They constantly interrupt, and combined with short-form interaction and addiction they are destructive to attention.
* They were built from the ground up to be walled gardens that crush experimentation and innovation.
* They are surveillance machines, and lend themselves to significantly more intrusive surveillance than most other tech except maybe cars… and cars are just trying to ape the mobile surveillance model.
* The ecosystem is toxic and built around exploiting the user through addiction, surveillance, and impulsive purchasing.
* Since they are so constrained, they supercharged the trend of all compute moving to the cloud where you own nothing and have no privacy.
The phrase “cigarettes of the mind” seems to fit.
All the useful things my phone does could be done with a much simpler device providing a map, a way to text and read email, and a browser. Most everything else is superfluous and a large amount of it is actively harmful, especially to young people and those not tech savvy enough to avoid the predatory aspects of the system.
If you could order Uber or Lyft through a browser I would be tempted to get one of those hipster minimal phones. There’s no technical reason you couldn’t. They just want you to use the app to maximize tracking.
Telecommunications were the first application for nearly every principle and technology now used in computers.
In the 1930s, Claude Shannon was trying to optimize the phone network's automatic switching fabric, built out of millions and millions of relays, when he proved, more or less by accident, that switching circuits are equivalent to Boolean algebra in a deep sense, and so the appropriate arrangement of switches can compute anything that can be computed.
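The heart of the equivalence is simple to sketch: contacts in series act like AND, contacts in parallel act like OR, and a relay's normally-closed ("break") contact gives you NOT. A toy illustration in Python, assuming nothing beyond that mapping:

```python
# Switch networks as Boolean algebra: series = AND, parallel = OR,
# a normally-closed contact = NOT.
def series(*branches):
    return lambda s: all(b(s) for b in branches)   # current flows only if every contact is closed

def parallel(*branches):
    return lambda s: any(b(s) for b in branches)   # current flows if any path is closed

def make(name):    # normally-open contact: closed when the relay is energized
    return lambda s: s[name]

def brk(name):     # normally-closed contact: closed when the relay is NOT energized
    return lambda s: not s[name]

# A series-parallel network that computes "a equals b" (XNOR):
# one path closes when both relays are energized, the other when neither is.
equal = parallel(series(make("a"), make("b")),
                 series(brk("a"), brk("b")))

for a in (False, True):
    for b in (False, True):
        print(a, b, equal({"a": a, "b": b}))
```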
The first major commercial application for the vacuum tube was not in radio, but in amplifying long-distance telephone links. They would quickly find use in frequency multiplexing telephone lines, too. [1]
Similarly, one of the first applications of vacuum tubes as high-speed electronic switches was multiplexing telegraph lines, and much of the initial research on high-reliability, low-power tubes that could be switched fully on without damage began with the telephone and telegraph companies.
The first computers were built out of relays and vacuum tubes specially designed for telecommunications.
The transistor was invented at Bell Labs. The planned application was the telephone network. The very same year, the theory behind modern error correcting codes was developed at Bell Labs too. These find extensive use today in data storage in computers, but the original goal was arbitrarily accurate transfer of telegraph data over noisy links.
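As a concrete taste of that line of work, here is a sketch of the classic Hamming(7,4) code, which protects four data bits with three parity bits and corrects any single flipped bit (an illustration of the idea, not production ECC code):

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword: positions 1..7 = p1 p2 d1 p3 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(codeword):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode(1, 0, 1, 1)
word[5] ^= 1                          # simulate a single-bit error on a noisy link
print(hamming74_decode(word))         # [1, 0, 1, 1] - the error is corrected
```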
Digital transmission of photographs dates to the late 1920s too; they wanted to send images over extremely noisy long-distance telegraph links. So record the intensity of light over each spot of the image on paper tape. Then send the tape over the wire. And reverse on the other end. [2]
I've come to understand the computer revolution as really the story of the telecommunication revolution. Computers serendipitously came about with advances in telecommunications, enabled by the same technologies, and indeed often driven by the needs of telecommunications. (One of the very first embedded applications for a computer, dating to experiments in the late 1950s, was the telephone exchange, of course.)
There seems to be a certain consensual hallucination that happens with these acquisitions. The acquirer says "you're amazing, you do great work, we don't want to change a thing, you'll still be completely autonomous, we'll just help you accelerate". And there's a good reason for it: the employees came to work for Slack, not Salesforce, and if they all quit then the acquisition will have been a waste. So Salesforce makes all the same promises, but at the end of the day there's reality. The second Salesforce bought Slack, Slack ceased to exist, and all the workers now work for Salesforce. There's no culture clash between Slack and Salesforce. There's a Slack culture that once existed, and now there's the Salesforce culture of the company those workers now work at. You can embrace it, you can be unhappy, and you can leave. But remember, you're at Salesforce now, and those other 70,000 workers - they joined the Salesforce culture, not the Slack culture, and they sure as hell didn't buy you because of your culture.
And all those promises about Slack being independent and autonomous? Well sure, that was the plan, but now Slack is a part of Salesforce and Salesforce makes decisions that make sense for Salesforce, not Slack. So no, you don't get your own sales team, and no, you don't get your own customer support team. You're part of Salesforce now, it doesn't make sense. Oh and no, you aren't going to get the benefits you were getting at Slack, you don't work at Slack, you work at Salesforce, and hey! Some of those benefits may genuinely be better. Those HR reps that sorted all your perks at Slack? They're working in a Salesforce HR department now (if they weren't fired) and they're working to give you the Salesforce benefits.
It's a tough message but when someone buys the company you work for, you no longer work at the old company, you work at the new company, no matter what anyone says.
And what a fate to impose on your children. You personally may choose the hardest road, but your unborn children would have no choice in the matter, and those future children would be essential to a permanent colony.
Moral dilemma: What if children conceived and born on Mars (if that's even medically viable) decide that Mars totally sucks, and they want to migrate to Earth?
Do we build a wall around Earth and stop them as "illegal aliens"?
Seriously, life on Earth is likely to be much better, much easier than life on Mars, so there's going to be a desire among a significant number of Martians to leave, just as there's a desire among Earthlings to move for a better life. Then what? In order to keep the Mars colony sufficiently "staffed", do you turn it into a totalitarian regime, where nobody can leave? No personal freedom?
How many of us want to sacrifice our lives and our happiness just to be a "backup plan"?
I rarely if ever hear space enthusiasts talk about the morality of a Mars colony. They act like it's just a technological problem, and it's somehow a given that we can put large numbers of humans wherever we want, whether or not the humans themselves would want that.
Yes and no. I'd say OS theory is also helpful when writing programs that run on an OS.
In the same way network theory is helpful when writing programs that communicate on a network.
I find I've retained a lot of my CS theory, but in a "quiet" way. [1] It only really surfaces when I'm talking about things with colleagues, and I take some information for granted, but ultimately work out they don't have the same understanding.
I do a fair bit of database work, and all that normalisation, foreign keys, relationships knowledge is "just there". On the other hand I only very occasionally have cause to do graphics work, so I remember the very basics, but not much else.
I also remember we did things which I haven't used for 30 years, but which "hold no fear". I recently had cause to consider writing some code in assembly, and it may or may not happen, but either way I have no "fear" of what it entails; it's "just assembly code."
So to the OP I say, your CS degree isn't about facts. It's about an understanding of how things fit together, and in your career you'll use that knowledge in ways you won't even notice. So relax, it's not like you have to write a refresher exam to keep up :)
[1] Today I'd get basically nothing in an exam; I don't remember all the rote-memorized stuff, like naming the 7 OSI networking layers, or some specific algorithm, or the theory of P vs NP, etc. But I do, tangentially, use that knowledge every day.
The cult of "move fast and break things" meant that those doing it the old-fashioned way were wrong.
The likes of Google and Facebook, as well as many startups inspired by them, have been an excellent demonstration of what commercial success looks like in a modern software business. In particular, they have (unfortunately, perhaps) demonstrated that quality really doesn't matter all that much in their target markets. In the early days, it turns out that users will forgive almost anything breaking if you have an attractive product. Later on, you can still break things often and almost completely ignore customer service as long as your lock-in effects are strong enough to keep most of your users anyway.
Even if you do need higher quality standards in your market, the lean startup view is to build anything you can and ship your MVP as quickly as possible. If and when you find product-market fit then you can pay off your "debts", potentially including big changes like rewriting your whole application or firing groups of early customers, because by that time you have the money and resources and user base to do these things without it hurting your bottom line too much. Again, from a business point of view, this strategy makes a lot of sense if you're operating in markets where these rules apply.
What I don't understand though is why so many people who are operating in markets where quality does matter seem to idolise big names like Facebook and Google and the people who work there. They have a high profile and they pay high salaries and their hiring processes are well known for being challenging, but I see little evidence that they effectively translate any of those apparent advantages into producing either better quality or more innovation than anyone else in the industry. If anything, given the enormous resources they have at their disposal, the opposite seems to be true.
Many developers never get past a composition-first mindset. That is beginner shit. It's putting Legos together without the instruction manual, only to stress about how the pieces click together instead of what the final state is. You end up with lots of decoration and shit everywhere. In the military we call this not seeing the forest for the trees.
As a senior myself in the corporate world who enjoys writing open source software, I can say there is a world of difference between people who build products and people who just write code at work. It's a difference in velocity, planning, and volume. When you are writing software on your own, where you aren't paid for your time, you tend to be in far less of a hurry and yet produce 10x as much. As a corporate developer you are paid to hurry and complete tasks, in many cases with incomplete documentation, which is just writing code and sometimes guessing at what you need. When you are on your own, success is measured in product delivery and feature completion, which is not just writing code.
Another major difference is communication. If you communicate poorly your open source software is dead on arrival. End of story. You should never have to guess at your business requirements, APIs, steps to reproduce defects, and various other things. The more you have to guess the more mistakes you will make. The only path to redemption is to communicate more precisely than those around you.
Perhaps the worst distinction is design by committee. Frequent long meetings just waste people's time. The time that's left over means you hurry harder. A more productive direction is for some fool to sit in a position of leadership and make arbitrary decisions. A wrong decision can be fixed, but wasted time is gone forever.
You can’t really explain this to a developer who has never written code outside the office because they refuse to believe what they have never experienced. The hurried low productivity churn is all they know.
Uneven distribution of skills and knowledge is a wrinkle that everybody has to grapple with in their careers.
I think what the post is describing is programmers who have a fairly narrow vision in which their particular talents and knowledge take on disproportionate value. I think these programmers tend to be the happiest workers, and they tend to develop themselves to the fullest, because they are fueled by their outsized belief in the value of their particular skills. This can bring a great positive energy, but the downside is that they tend to turn into hero programmers and create codebases that are impossible for others to work effectively in, and they can sabotage the confidence of people around them. If they are so inclined, they can be very effective bullies, but even if they are nice people, if they really believe in the vital necessity of an ability they have that others don't, they won't appreciate the value of others' work. For example, being a prodigy at writing fast, low-level code is a good thing, but if you develop a blanket belief in the importance of code being extremely fast at a low level, you will mistrust and devalue code written by other members of your team. The more important the project, the more you'll feel that you need to write the important parts yourself. Your teammates will notice, in subtle or unsubtle ways, that you don't believe their code is good enough. You'll also fill the codebase with code that only you can properly maintain.
Probably the most dangerous strength for this kind of programmer to embrace is their ability to handle a high degree of complexity. In almost any software engineering context, this will result in the generation of unnecessary complexity that one team member thrives in and other team members drown in. The opportunities to "improve" a codebase at the cost of a little additional complexity are infinite. Other team members' work will have far fewer of these improvements, and this will be painted in a negative light. When people argue against marginal improvements on the basis of added complexity, the hero programmer might feel that they are simply inferior programmers trying to devalue their talent out of jealousy.
In short, working at the absolute limit of your greatest strength and foregrounding that in the work of your team is extremely fulfilling, but a great way to sabotage your team.
At the opposite end of the spectrum are people who are too quick to write off their strengths as an unimportant part of a big whole. They are less happy, and more likely to plateau or even fall off in performance over their career, as their awareness broadens and their own self-assessment shrinks in proportion. When they come into an environment that is shaped by the talents of others, they might feel paralyzed and valueless, instead of believing that the team can and should change to take advantage of their skills as well. These programmers will chronically disappear and underperform.
I think the ideal is not a happy medium between these extremes but rather a change in approach. Instead of foregrounding your strength in the work of the team, can you background it? If you're a performance expert, can you design an application so that others working on it only need to follow a few simple rules to maintain the high performance? Can you make it trivial for other programmers to check the performance impact of their code changes? Can you instrument the application so that performance issues can be quickly identified and diagnosed? Can you turn performance into something that other team members barely have to think about?
This requires you to value other people not knowing what you know about performance, to see that as a cost savings. Instead of thinking, "My knowledge is valuable because I am the only person my team can trust to hack on this system without introducing performance regressions," switch it to, "My knowledge is valuable because I enable my team to confidently hack on this system without agonizing over performance."
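As a sketch of what that "backgrounding" can look like in code (the function names and budget numbers here are hypothetical), the expert's knowledge can be encoded as a check that runs with everyone's tests, so teammates learn about regressions without having to think about performance at all:

```python
import time
from functools import wraps

# Hypothetical per-operation latency budgets, set once by the performance
# expert; everyone else just runs the test suite as usual.
BUDGETS_MS = {
    "load_dashboard": 50,
    "search_products": 20,
}

def within_budget(name):
    """Decorator: fail loudly if the wrapped call exceeds its budget."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            budget = BUDGETS_MS[name]
            assert elapsed_ms <= budget, (
                f"{name} took {elapsed_ms:.1f} ms, budget is {budget} ms")
            return result
        return wrapper
    return decorator

@within_budget("search_products")
def search_products(query):
    # stand-in for the real implementation
    return [item for item in ["alpha", "beta", "gamma"] if query in item]

print(search_products("a"))  # all three items contain "a"
```

A single timing run is obviously noisy; the point of the sketch is only the shape: the rules live in one place, and the team inherits them for free.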
Armstrong's death was one of the few that I was sort of hit by, despite not knowing him personally. I've watched so many of his talks, read as much as I could from him, and I've generally found him to be quite an inspiration and seemingly a great guy.
More on the point of the article, it's fun to think that "OO Sucks" is sort of ironic, given that Alan Kay's initial description of OO was closer to actors than to what we think of as OO today (he acknowledged this later).
As Kay puts it:
> I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.
>
> The big idea is "messaging"
>
> [..]
>
> The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.
[0][1]
This is very much in the spirit of Armstrong's quote in the article:
> Since functions and data structures are completely different types of animal it is fundamentally incorrect to lock them up in the same cage.
Armstrong talked a lot about how shared mutable state was wrong on a fundamental level - it "breaks reality(/causality)" and that sort of thing. Again, sort of fun to think about the fact that the core ideas with actors seemed to have an origin in an early focus on asynchronous 'cell'-like computers, like the JOHNNIAC in 1953[2], even though the foundation of the model wasn't named or formalized until the 70s.
"Its designers began with the hope of stretching the mean free time between failures and increasing the overall reliability by a factor of ten". Systems like JOHNIAC where IAS machines - asynchronous CPUs. These worked through causality, not synchronization.
In Armstrong's thesis[3] 'Making reliable distributed systems in the presence of software errors' he references papers like [4] 'Why do computers stop and what can be done about it? Technical Report 85.7, Tandem Computers, 1985.', which talks again about isolated processes and transactions as a foundation to reliable computing - over 30 years later.
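To make the "communicate by messages, share no state" shape concrete, here is a toy sketch in Python using threads and queues rather than real Erlang processes (so there is no supervision or fault isolation here, just the communication pattern):

```python
import threading
import queue

def counter_actor(mailbox: queue.Queue):
    """A toy 'actor': all of its state is local; the outside world only sends messages."""
    count = 0
    while True:
        msg = mailbox.get()
        if msg[0] == "increment":
            count += 1
        elif msg[0] == "get":
            reply_to = msg[1]
            reply_to.put(count)   # reply by message, never via a shared variable
        elif msg[0] == "stop":
            break

mailbox = queue.Queue()
threading.Thread(target=counter_actor, args=(mailbox,), daemon=True).start()

for _ in range(3):
    mailbox.put(("increment",))

reply = queue.Queue()
mailbox.put(("get", reply))
print(reply.get())                # 3
mailbox.put(("stop",))
```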
This whole area is so deeply fascinating, with a century of repeating, refining ideas, and I found that just by reading what I could from Armstrong I had a sort of rough guide through this area. There are these cool ties to early ideas about AI, and I guess a lot of people thought that languages should model life, and later Kay talks about this as well, and funnily enough AWS now has "cell-based architecture" - their discipline built around isolation and fault tolerance. A lot of this is sort of just random connections from jumping from paper to paper - but I just found it all really cool.
Reading this thread, it almost seems like people don't know who Joe Armstrong is? Or at least they've missed a lot of the point. This isn't an "X vs Y" from some rando, he built Erlang. It's also not about functional programming.
I highly recommend reading what he has to say, and watching his talks.
> Things often make sense at a very broken-down, individual level
This is how humans process new information. Big ideas often cannot be grasped in their entirety on first encounter; cf. the parable of the blind men and the elephant.
> I’m just not good at many of said essential things
You're not good yet.
I've been building products & services for forty years. I look back on my ideas and level of understanding from the first decade or two, and I judge myself super naive, often overreaching for "big picture" enlightenment before I was even connected to the details.
You may have:
* unrealistic expectations of yourself;
* imposter syndrome;
* an attentional difference;
* or all of the above.
What you probably need is a mentor. Where you might find one depends on many factors, mostly not technical.
> this is incredibly difficult field where a mostly immutable attribute (high intelligence) is required
Moderate intelligence is required, but it is tenacity that correlates with outcomes. There are corners of the technology sector where four-sigma intellect moves the needle, but only a few.
“Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful people with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan 'press on' has solved and always will solve the problems of the human race.” — Calvin Coolidge
I puzzled about that for years and concluded that tests are a completely different kind of system, best thought of as executable requirements or executable documentation. For tests, you don't want a well-factored graph of abstractions—you want a flat set of concrete examples, each independently understandable. Duplication helps with that, and since the tests are executable, the downsides of duplication don't bite as hard.
A test suite with a lot of factored-out common bits makes the tests harder to understand. It's similar to the worked examples in a math textbook. If half a dozen similar examples factored out all the common bits (a la "now go do sub-example 3.3 and come back here", and so on), they would be harder to understand than repeating the similar steps each time. They would also start to use up the brain's capacity for abstraction, which is needed for understanding the math that the exercises illustrate.
These are two different cognitive styles: the top-down abstract approach of definitions and proofs, and the bottom-up concrete approach of examples and specific data. The brain handles these differently and they complement one another nicely as long as you keep them distinct. Most of us secretly 'really' learn the abstractions via the examples. Something clicks in your head as you grok each example, which gives you a mental model for 'free', which then allows you to understand the abstract description as you read it. Good tests do something like this for complex software.
Years ago when I used to consult for software teams, I would sometimes see test systems that had been abstracted into monstrosities that were as complicated as the production systems they were trying to test, and even harder to understand, because they weren't the focus of anybody's main attention. No one really cares about it, and customers don't depend on it working, so it becomes a twilight zone. Bugs in such test layers were hard to track down because no one was fresh on how they worked. Sometimes it would turn out that the production system wasn't even being tested—only the magic in the monster middle layer.
An example would be factory code to initialize objects for testing, which gradually turns into a complex network of different sorts of factory routines, each of which contributes some bits and not others. Then one day there's a problem because object A needs something from both factory B and factory C, but other bits aren't compatible, so let's make a stub bit instead and pass that in... All of this builds up ad hoc into one of those AI-generated paintings that look sort of like reality but also like a nightmare or a bad trip. The solution in such cases was to gradually dissolve the middle layer by making the tests as 'naked' as possible, and the best technique we had for that was to shamelessly duplicate whatever data and even code we needed into each concrete test. But the same technique would be disastrous in the production system.
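To make the 'naked test' idea concrete, here is a small sketch (the domain and names are invented for illustration): each test states its own concrete data inline, accepting a little duplication, instead of routing setup through a shared factory layer:

```python
# Hypothetical function under test.
def order_total(prices, discount_rate):
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_rate), 2)

# "Naked" tests: each one is a self-contained, concrete example.
# The duplicated literals are the point - you can read each test alone.
def test_no_discount():
    assert order_total([10.00, 5.50], 0.0) == 15.50

def test_ten_percent_discount():
    assert order_total([10.00, 5.50], 0.10) == 13.95

def test_empty_order():
    assert order_total([], 0.10) == 0.0

# The factory-layer alternative (make_order(), make_discounted_order(), ...)
# saves a few characters here but hides the numbers the assertions depend on.

if __name__ == "__main__":
    test_no_discount()
    test_ten_percent_discount()
    test_empty_order()
    print("all examples pass")
```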
Sigh - people still don't understand this..... Many (many.... many!) years ago I did my first Unix port (very early 80s): V6 to the VAX. My code ran as a virtual machine under VMS - the Unix kernel running in supervisor mode in place of VMS's shell. I ported the kernel and the C compiler at the same time (NEVER do this, it is hell).
Anyway, I came upon this comment in the swap-in code a bunch of times and never understood it, until I came to it from the right place and it was obvious - a real aha! moment.
So here's the correct explanation - V6 was a swapping OS, only needed a rudimentary MMU, no paging. When you did a fork the current process was duplicated to somewhere else in memory .... if you didn't have enough spare memory the system wrote a copy of the current process to swap and created a new process entry pointing at the copy on disk as if it was swapped out and set that SSWAP flag. In the general swap-in code a new process would have that flag set, it would fudge the stack with that aretu, clear the flag and the "return(1)" would return into a different place in the code from where the swapin was called - that '1' that has "has many subtle implications" is essentially the return from newproc() that says that this is the new process returning from a fork. Interestingly no place that calls the swap in routine that's returning (1) expects a return value (C had rather more lax syntax back then, and there was no void yet), it's returned to some other place that had called some other routine in the fork path (probably newproc() from memory).
A lot of what was going on is tied up in retu()/aretu() syntax, as mentioned in the attached article, it was rather baroque and depended heavily on hidden details of what the compiler did (did I mention I was porting the compiler at the same time ....) - save()/restore() (used in V7) hadn't been invented yet and that's what was used there.
Funny, but I've only taken a few jobs where coding was more than 50% of my day; for the most part I've held very eclectic jobs in a predominantly technical or artistic career.
I was reading about neural networks and genetic algorithms in the late 80s and programming in Lisp and C. Mark Watson's book (he's on HN)
"Common Lisp Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory"
got me started in coding them rather than just reading the more academic tomes I read in the late 80s.
I am glad I didn't just focus on that then, although, I'd probably have more of a savings now and higher salary.
But not the adventures I managed to have by meeting people in all different types of jobs. And there's something about physical labor that calms me. Sitting at a desk all day seems so antithetical to actually living a life out in the real world and feeling the earth between your toes, the sun and wind on your body while taking in the green things around you!
I was dumpster diving and taking stepper motors out of dot matrix printers in the early 90s down on Wall street while working a 12 to 9 shift taking care of a VAX/VMS and Banyan Vines PC network for an environmental law firm. I used to write reports in Paradox, work on Concordance - a document management system - BRS search routines and queries. I used the steppers for my amateur robotics influenced by Rodney Brooks' subsumption architecture and Mark Tilden's BEAM stuff. I played Doom for the first time at that job!
I was an art handler (Brooklyn Museum), a paintings conservator's assistant, a 3D FX and animation intern on SGI using Prisms (now Houdini) at a NYC ad agency/FX company, a welder, machinist, CNC coder, technical diver, and ropework technician (think IRATA/SPRAT), among the memorable items. I remember being one of the early funders for getting Blender3D open sourced way back when (2001/02), and writing a Python script to turn photos into wooden carvings back in 2002/03 with it on my CNC table. The software was based upon a paper by NASA about 'Shape from Shading'. It also churned out G-code for our table after downloading the client's file, and gave me an estimate of machine time and overall cost. This was before 3D printers, and tables used to be in the 6 figures, but this was an $8k 4 ft x 8 ft router table. My partner and I used it to cut out kayak designs we made out of 4 mm Okume plywood and the carvings I did in solid maple in bas relief. We did some urns for pets and pet cemeteries (really! http://thewoodenimage.com. Used to come up on the Wayback Machine without pics or other content). Sad to say, people loved our carvings at boat shows, etc., but nobody bought them. We had crowds at our table, but not a single purchase. Funnily enough, cemeteries were our market, and a gentleman offered me money for the software to make bronze tombstone plaques from family photos!
I worked at a company as a sys admin, and did some coding in C, C#, and Java, but my main strength was writing front ends for old Italian sheet metal punch machines and translating modern G-code into the 'Octo' machines' proprietary code. This was in C. I also put a temperature sensor in the server room and had it ping a pager for me and the other sys admins.
I did Christmas display window animations for a display company in NYC, first as a welder and later as the head of their creative technologies. One Christmas (1996-7?) we had pneumatically operated 4 to 5 ft high Nutcrackers whose arms, each holding a bell, would go up if a child touched the window: five large buttons were painted on the back of the window, each with a capacitive sensor behind it, and a BasicStamp (think today's Arduino) ran it all, queuing the music and triggering the pneumatic valves. Remember, right up until this the internet and information from companies was not as accessible; I had to write letters, send emails, and make phone calls for weeks.
I will probably never retire. I just put a deposit down on a live/work space to use my desktop cnc mill/lathe, 3D printer and electronics equipment (a CNC sewing/embroidery machine too).
I currently work at an engineering firm that specializes in stage machinery and anything entertainment-related. Because I don't code full-time for a living, I code up things in what works best for me and the company I work for. I write them in C, python, factor, J, forth, assembler (micros), and I play with a lot more PLs (Zig, Rust, Haskell, R/RStudio), but I am mainly sticking with C and J for now with Zig as a cross compiler. I avoid python, not because it isn't good or I don't like it, but it doesn't fit in with my preference for terseness and shedding bloat. I am having a renewed interest in C and assembler. I am getting older for some of the more physical work I did, so I am now coding more to try and see if I can make something of it.
I was working in Macau for 6-plus years on The House of Dancing Water show, up until the end of 2015. I have hundreds of technical dives working on hydraulics, air FX, and electrical systems under water. The pool is almost 10 m deep and was the world's largest commercial indoor pool!
Programming is not so much a career for me as it is simply a tool. The same goes for my electronics and metalworking experience, or any other skill. Critical thinking, problem solving, and knowing how to provide what's needed or desired are more important. I've always followed my heart or gut on what's next, and it has landed me in a good place at my age. Follow your bliss, to quote Joseph Campbell!
I'd argue there's no "shield" to begin with. The "shield" was thrown away when the real employer/employee relationship ceased to exist - the kind where, regardless of slightly better offers, employees would stay because of that relationship, and where businesses in a downturn sheltered their dedicated employees with resources from past successes.
Today, pretty much every medium- to large-scale employer I've anecdotally interacted with is focused only on efficiency and ROI. You're a tool to drive business success and if you're not doing so or a slightly better tool is available: you're gone.
This culture shift means people really don't care about any sort of culture propaganda your business spews at them. They nod, smile, and play along because everyone involved knows it's merely a facade.
It's no longer true that happy employees don't leave jobs. Employees will seek their maximum ROI in the same fashion as businesses do and that culture was established by business practices.
As far as "loving" a job, very few people get paid and get paid well to do things they truly love. This alignment happens but it's not all that common. The fact is, survival in modern life is expensive and people will do whatever they need to do to pay their expenses. They'll do their best to find something enjoyable in that process but the vast majority will settle for something that doesn't make them miserable. The pigeonhole principle almost assures this (IMHO).
The bizarre shield metaphor is part of the never-ending push you see from HR trying to reestablish the illusion of a real employee/employer relationship. Maybe the people in HR really are decent human beings and do care, so they take the defensive position. What they need to realize is that they're not really in charge and are really just the face of business propaganda from upper management and investors.
> Legacy code that I wrote myself is hard to read.
Sometimes I don't even recognize myself as the author for a while. Realizing I'm reading something I wrote and can't understand it without studying it carefully has been rather surprising, and it reminds me of the old Kernighan quote: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
My goals used to be to write code that looked and felt cool to myself and others, and to add features in clever ways with as little change to the function signatures and structure as possible, so as not to disturb the architecture. While keeping changes small is a good goal to balance, it's always possible to make them too small, leave concepts unnamed, and fail to restructure around new concepts when you should. Do that a few times in a row (which I have done) and you end up with architectural spaghetti code that might look clean and modular at first glance but becomes a maintenance nightmare. My goals are now:
- to make code so easy to read it's boring, because if it looks interesting, it's probably too clever;
- to identify and name all the unnamed concepts, and to refactor often to accommodate new features.
http://deletionpedia.org/ is a small project to cope with this in a positive way. A couple of years ago, when I was frustrated by deletionism once again, I wrote a bot that automatically copies articles that are about to be deleted.
Related is William Gibson's idea of "the Jackpot", or a "multicausal apocalypse", where a large catastrophe is caused not by a single major factor but by several smaller ones accumulating and interacting over time. Gibson talks about it in https://vimeo.com/116132074.
A good book in this space is Normal Accidents[0], which describes (among others) the impressive cumulative failure that caused the Three Mile Island incident. Fascinating read and applicable to software systems as well.
> Blindly copying highly expensive software development processes from Google or other super-unicorn companies like Apple or Facebook is likely a prescription for failure.
Yes, this can't be stressed enough. Out here with the little people like non-Google-employed me, there's almost a desperation to copy the things that Google and Facebook do, and it's often justified with "Well, Google, Facebook, or Netflix use it!"
A huge number of tech fads over the last decade have been the direct result of Google's publications and public discussions. Google might need BigTable, but that doesn't mean we should all bury our SQL RDBMS and use a crappy open-source knockoff "based on the BigTable paper". More than likely, Google itself would've been happy to stick with a SQL RDBMS.
Google has the engineering staff, including two extremely technical founder-CxOs with PhDs on which Google's initial technology was based, and the hundreds of millions of dollars necessary to do things The Google Way. They can rewrite everything that's slightly inconvenient and do a lot of cutting-edge theoretical work. They have the star power and the raw cash to hire the people who invented the C language back in the 70s and ask them to make a next-gen C on their behalf.
Google has the technical and financial resources to back this type of stuff up, provide redundancy, put out fires, and ensure a robust deployment that meets their needs. They keep the authors of these systems on-staff so they can implement any necessary changes ASAP. In many cases, Google is operating at a truly unprecedented scale and existing technologies don't work well, so that theoretical cutting-edge is necessary for them.
None of those things are going to be true for any normal company, even other large public ones. Google's solutions are not necessarily going to meet your needs. Their solutions are not necessarily even good at meeting their own needs. Stop copying them!
I'm so. sick. of sitting through meetings that have barely-passable devs making silly conjectures about something they heard Google/Facebook are doing or that they read about on a random hyper-inflated unicorn's engineering blog.
You want your stack? OK, here it is: a reasonably flexible, somewhat established server-side language (usually Java, .NET, Python, Ruby, or PHP; special applications may need a language that better targets their specific niche, like Erlang), a caching layer (Redis or Memcached) that's actually used as cache, NOT a datastore, a mainstream SQL database with semi-reasonable schemas and good performance tuning, and a few instances of each running behind a load balancer. That's it, all done. No graph databases, no MongoDB, no quadruple-multi-layered-massively-scalable-Cassandra-or-Riak-clusters, no "blockchain integration", no super-distributed AMQP exchanges, ABSOLUTELY NO Kubernetes or Docker (use Ansible), none of that stuff.
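To be concrete about a caching layer that's actually used as a cache and not a datastore: the SQL database stays the source of truth, the cache only holds expiring copies, and losing the cache loses nothing. A rough read-through sketch, assuming a local Redis and an existing users table (the names are illustrative):

```python
import json
import sqlite3
import redis  # pip install redis

cache = redis.Redis(host="localhost", port=6379)
db = sqlite3.connect("app.db")   # assumes a users(id, name, email) table exists

def get_user(user_id: int, ttl_seconds: int = 300):
    """Read-through cache: Redis holds expiring copies, SQL stays the source of truth."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    row = db.execute(
        "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return None

    user = {"id": row[0], "name": row[1], "email": row[2]}
    cache.setex(key, ttl_seconds, json.dumps(user))  # expiring copy, never the only copy
    return user
```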
Just sit down and get something done instead of wasting millions of your employers' dollars trying to make yourself feel important. If you do need an AMQP exchange, bolt it on after the fact to address the specific issue at hand, once you know why you need it, and make sure you know what you're doing before you put it in production (it's apparently difficult to grasp that AMQP servers are NOT data stores and that you can't just write directly to them and wait for some worker to pick it up if you care about data integrity).
Don't start a project with the primary intention of making a convoluted mess; let it get that way on its own. ;)
Wikipedia can be really helpful when you are already at an advanced level in math - to quickly look up definitions.
For learning the fundamentals, it is completely useless: too many cross-references, ALL of which you have to understand in math.
https://proofwiki.org/wiki/Main_Page
Proofwiki keeps the proofs honest and should be readable like a textbook, as long as you look up the right things. Proofwiki will actually tell you which proofs you have to understand before the one you are currently looking at. I don't know how complete it is, though.
It's strictly about proofs, though. Actually calculating stuff is probably more important to you.
The problem with "following a course in linear algebra" is that linear algebra is where math students learn reading and writing proofs, which takes practice - just like learning programming from a book vs. actually programming.
In order to get anywhere here, you need to apply your knowledge.
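On the "actually calculating stuff" side, even a small exercise like solving a linear system by hand and then checking it numerically is the kind of applied practice that sticks; a quick sketch with NumPy:

```python
import numpy as np

# Solve the system  2x + y = 5,  x - 3y = -8  (i.e. A @ v = b).
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([5.0, -8.0])

v = np.linalg.solve(A, b)
print(v)                        # [1. 3.]  ->  x = 1, y = 3
print(np.allclose(A @ v, b))    # True: the solution checks out
```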
Due to an eyesight problem I run everything on a dark background, including my theme, editors, and the web. I'm not sure if this makes me sleepy, but it's easy to try. The best Firefox add-on for dark browsing is https://addons.mozilla.org/en-US/firefox/addon/dark-backgrou...
I had a philosophy professor once who was very upset after she'd graded our papers on Plato's Republic.
She gripped the lectern and looked at the floor for a few seconds sadly.
She looked up.
"What happened here, guys? You're all so smart. This was a real let-down. No, the Republic is not a sacred text. No, we're not here to worship it. But there's also such a thing as employing the critical spirit in the wrong way. We're here to understand this book, to engage with its ideas seriously, not to tear it apart without thought to feel superior."
"Again, this is not an object of worship. But this book has been preserved for 2500 years by human beings, most of whom had to copy each page by hand. A long chain of brilliant people from across generations worked to put this paperback in your hand. They did this in part because they thought it was worth the effort of preserving it for you. If, after a minute or two of thought, we find a glaring flaw that makes Plato looks like a blithering idiot, it would be wise to examine our critique in a spirit of humility. Without humility and charity, it's impossible to learn anything."
This idea is equally valuable outside the context of interpreting philosophical texts.
Everything she said is unfashionable, not only in academia but in public life. Political entertainers earn their keep by deliberately distorting their opponents' arguments with easy mockery. A lot of social media reward mindless criticism.
But the most productive, insightful online communities have some element of exclusion and some punishments (karma, banning, etc.) for repeated violations of the principle of charity.