FHE solves privacy-from-compute-provider and doesn't affect any other privacy risks of the services. The trivial way to get privacy from the compute provider is to run that compute yourself - we delegate compute to cloud services for various reasonable efficiency and convenience reasons, but a 1000-fold less efficient cloud service usually isn't competitive with just getting a local device that can do that.
The parent point isn't that shitty code doesn't have defects but rather that there's usually a big gap between the code (and any defects in that code) and the actual service or product that's being provided.
Most companies have no relation between their code and their products at all - a major food conglomerate will have hundreds or thousands of IT personnel and no direct link between defects in their business process automation code (which is the #1 employment of developers) and the quality of their products.
For companies where the product does have some tech component (e.g. the refrigerators mentioned above), again, I'd bet that most of that company's developers don't work on any code that's intended to ship in the product; in such a company there is simply far more programming work outside the product than inside it. The companies making a software-first product (like the startups on Hacker News), where a software defect implies a product defect, are an exception, not the mainstream.
If an LLM (or any other tool) makes it so that a team of 8 can get the same results in the same time as a team of 10 used to, then I would count that as "replaced 2 programmers". Even if there's no particular person whose whole job has been replaced, that's not a meaningful practical difference: replacing a significant fraction of every programmer's job has the same outcomes and impacts as replacing a significant fraction of programmers.
When hand-held power tools became a thing, the Hollywood set builder’s union was afraid of this exact same thing - people would be replaced by the tools.
Instead, productions built bigger sets (the ceiling was raised) and smaller productions could get in on things (the floor was lowered).
I always took that to mean “people aren’t going to spend less to do the job - they’ll just do a bigger job.”
It's always played out like this in software, by the way. Famously, animation shops hoped to save money on production by switching over to computer rendered cartoons. What happened instead is that a whole new industry took shape, and brought along with it entire cottage industries of support workers. Server farms required IT, renders required more advanced chips, some kinds of animation required entirely new rendering techniques in the software, etc.
A few hundred animators turned into a few thousand computer animators & their new support crew, in most shops. And new, smaller shops took form! But the shops didn't go away, at least not the ones who changed.
It basically boils down to this: some shops will act with haste and purge their experts in order to replace them with LLMs, and others will adopt the LLMs, bring on the new support staff they need, and find a way to synthesize a new process that involves experts and LLMs.
Shops who've abandoned their experts will immediately begin to stagnate and produce more and more mediocre slop (we're seeing it already!), while the shops who metamorphose into the new model you're speculating at will create a whole new era of process and production. Right now, you really want to be in that second camp - the synthesizers. Eventually the incumbents will have no choice but to buy up those new players in order to co-opt their process.
And 3D animation still requires hand animation! Nobody starts with 3D animation, the senior animators are doing storyboards and keyframes which they _then_ use as a guide for 3D animation.
The saddle, the stirrup, the horseshoe, the wagon, the plough, and the drawbar all enhanced the productivity of horses and we only ended up employing more of them.
Then the steam engine and internal combustion engine came around and work horses all but disappeared.
There's no economic law that says a new productivity-enhancing programming tool is always a stirrup and never a steam engine.
I think you raise an excellent point, and we can use it to figure out how it could apply in this case.
All the tools that are stirrups were used "by the horse" (you get what I mean); that implies to me that so long as the AI tools are used by the programmers (what we've currently got), they're stirrups.
The steam engines were used by the people "employing the horse" - a la "people don't buy drills, they buy holes" (people don't employ horses, they move stuff) - so that's what to look for to spot a steam engine.
IMHO, as long as all this is "telling the computer what to do", it's stirrups, because that's what we've been doing. If it becomes something else, then maybe it's a steam engine.
And, to repeat - thank you for this point, it's an excellent one, and provides some good language for talking about it.
Maybe another interesting case would be secretaries. It used to be very common that even middle management positions at small to medium companies would have personal human secretaries and assistants, but now they're very rare. Maybe some senior executives at large corporations and government agencies still have them, but I have never met one in North America who does.
Below that level it's become the standard that people do their own typing, manage their own appointments and answer their own emails. I think that's mainly because computers made it easy and automated enough that it doesn't take a full time staffer, and computer literacy got widespread enough that anyone could do it themselves without specialized skills.
So if programming got easy enough that you don't need programmers to do the work, then perhaps we could see the profession hollow out. Alternatively we could run out of demand for software but that seems less likely!
Oh my, no. Fabrics and things made from fabrics remain largely produced by human workers.
Those textile workers were afraid machines would replace them, but that didn't happen - the work was sent overseas, to countries with cheaper labor. It was completely tucked away from regulation and domestic scrutiny, and so remains to this day a hotbed of human rights abuses.
The phenomenon you're describing wasn't an industry vanishing due to automation. You're describing a moment where a domestic industry vanished because the cost of overhauling the machinery in domestic production facilities was near to the cost of establishing an entirely new production facility in a cheaper, more easily exploitable location.
I think you're talking about people who sew garments, not people who create textile fabrics. In any case, the 19th century British textile workers we all know I'm talking about really did lose their jobs, and those jobs did not return.
How many of them worked in America and Western Europe and were paid a living wage in their respective countries? And how many of those 430 million people currently work making textiles in America and Western Europe today, versus in lower-paid positions in poorer countries, under worse conditions (like being locked into the sweatshop, with bathroom breaks rationed)?
Computers aren't going anywhere, so the whole field of programming will continue to grow, but will there still be FAANG salaries to be had?
i heard the same shit when people were talking about outsourcing to India after the dotcom bubble burst. programmer salaries would cap out at $60k because of international competition.
if you're afraid of salaries shrinking due to LLMs, then i implore you, get out of software development. it'll help me a lot!
It depends entirely on whether the amount of software being built is constrained by feasibility/cost or by a lack of commercial opportunities.
Software is typically not a cost-constrained activity, due to its typically high ROI at scale. It's mostly about fixed costs and scaling profits. Unfortunately, given this, my current belief is that on balance AI will destroy many jobs in this industry if it gets to the point where it can do a software job.
Assuming inelastic demand (demand for software relative to SWE costs), any cost reductions in inputs (e.g. AI) won't translate into much more demand for software. The same effect that drove SWE prices high without changing demand for software all that much (which explains the 2010s IMO, particularly in places like SV) also works in reverse.
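To make that concrete, here's a rough back-of-envelope sketch (Python, entirely made-up numbers, crude constant-elasticity assumption) of how SWE labour demand moves when AI cuts the labour needed per unit of software:

```python
# Back-of-envelope only: "elasticity" is the % change in quantity of software
# demanded per % change in its price. All numbers are invented for illustration.

def demand_change(price_change: float, elasticity: float) -> float:
    """Approximate % change in quantity demanded for a given % price change."""
    return elasticity * price_change

productivity_gain = 0.5            # assume AI halves the labour needed per unit of software
price_change = -productivity_gain  # assume the savings are passed through as a ~50% price drop

for elasticity in (-0.3, -2.0):    # inelastic vs. elastic demand
    extra_demand = demand_change(price_change, elasticity)
    # total labour = (labour per unit after the gain) * (units demanded after the price drop)
    labour_needed = (1 - productivity_gain) * (1 + extra_demand)
    print(f"elasticity {elasticity:+.1f}: software demand {extra_demand:+.0%}, "
          f"SWE labour needed {labour_needed - 1:+.0%}")
```

With inelastic demand (-0.3 here) the gain mostly shows up as roughly 40% fewer SWE-hours needed; the Jevons-style outcome only appears if demand is elastic enough (around -2.0 or beyond in this toy model) for the extra demand to eat the savings.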
Is there a good reason to assume inelastic demand? In my field (biomedical research) I see huge untapped areas in need of much more and better core programming services. Instead we “rely” way too much on grad students (not even junior SWEs) who know a bit of Python.
Depends on the niche (software is broad), but as a whole, for the large software efforts employing a good chunk of the market, I think so. Two main reasons for thinking this way:
- Software scales: revenue is generally a function of market size, reach, network effects, etc. Cost is a factor, but not the greatest one. Most software makes its profit "at scale", and engineering is a fixed capital cost. This means that software feasibility is generally inelastic to cost; if SWEs became cheaper, or unnecessary, it wouldn't change the potential profit-vs-cost equation much, in my view, for many opportunities. Software is limited more by ideas and untapped market opportunities. Yes, it would be much cheaper to build things, but that wouldn't change the feasibility of many project assessments, since the cost of SWEs, at least in the assessments I've seen, isn't the biggest factor. This plays out to varying degrees in a lot of capex: as long as the ROI makes sense, it's worth going ahead, especially for larger orgs with more access to capital. The ROI from scale often dwarfs potential cost rises, making feasibility less a function of SWE demand. The same effect appears in other engineering disciplines to varying extents; software just has it in spades, until you mix hardware scaling in (e.g. GPUs).
- The previous "golden era", in which the inelasticity of software demand w.r.t. cost meant salaries just kept rising. Inelasticity can be good for sellers of a commodity if demand increases, and more importantly, demand didn't really decrease for most companies as SWE salaries kept rising; entry requirements were generally relaxing. AI potentially reverses that: the same inelasticity that was good for sellers of labour becomes a bad thing.
However, small "throwaway" software, which does have an "is it worth the cost" factor, will most probably increase under AI. I just don't think it will offset the reduction demanded by capital holders; nor will it necessarily be done by the same people anyway (democratizing software dev), meaning it isn't a job saver.
In your case I would imagine there is a reason that software doesn't have SWEs coding it now: it isn't feasible given the scale it would likely have (I assume just your team). AI may make it feasible, but not in a way that helps the OP - it does so by making it feasible for, as you put it, grad students who aren't even junior SWEs. That doesn't help the OP.
This could very well prove to be the case in software engineering, but it also could very well not; what is the equivalent of "larger sets" in our domain, and is that something that is even preferable to begin with? Should we build larger codebases just because we _can_? I'd say likely not, whereas it did make sense to build larger/more elaborate movie sets once they could.
Also, a piece missing from this comparison is a set of people who don't believe the new tool will actually have a measurable impact on their domain. I assume few-to-none could argue that power tools would have no impact on their profession.
> Should we build larger codebases just because we _can_?
The history of software production as a profession (as against computer science) is essentially a series of incremental increases in the size and complexity of systems (and teams) that don't fall apart under their own weight. There isn't much evidence we have approached the limit here, so it's a pretty good bet for at least the medium term.
But focusing on system size is perhaps a red herring. There is an almost unfathomably vast pool of potential software systems (or customization of systems) that aren't realized today because they aren't cost effective...
Have you ever worked on a product in production use without a long backlog of features/improvements/tests/refactors/optimizations desired by users, managers, engineers, and everyone else involved in any way with the project?
The demand for software improvements is effectively inexhaustible. It’s not a zero sum game.
This is a good example of what could happen to software development as a whole.
In my experience large companies tend to more often buy software rather than make it.
AI could drastically change the "make or buy" decision in favour of make, because you need fewer developers to create a perfectly tailored solution that directly fits the needs of the company. So "make" becomes affordable and more attractive.
That's actually not accurate. See Jevons paradox, https://en.m.wikipedia.org/wiki/Jevons_paradox. In the short term, LLMs should have the effect of making programmers more productive, which means more customers will end up demanding software that was previously uneconomic to build (this is not theoretical - e.g. I work with some non-profits who would love a comprehensive software solution; they simply can't afford it, or the risk, at present).
yes, this. the backlog of software that needs to be built is fucking enormous.
you know what i'd do if AI made it so i could replace 10 devs with 8? use the 2 newly-freed up developers to work on some of the other 100000 things i need done
Every company that builds software has dozens of things they want but can't prioritize because they're too busy building other things.
It's not about a discrete product or project: continuous improvement of what already exists is what makes up most of the volume of "what would happen if we had more people".
A consumer-friendly front end to NOAA/NWS website, and adequate documentation for their APIs would be a nice start. Weather.com, accuweather and wunderground exist, but they're all buggy and choked with ads. If I want to track daily rainfall in my area vs local water reservoir levels vs local water table readings, I can do that, all the information exists but I can't conveniently link it together. The fed has a great website where you can build all sorts of tables and graphs about current and historical economic market data, but environmental/weather data is left out in the cold right now.
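For what it's worth, the NWS piece of that is at least reachable today. Here's a minimal sketch (Python; the coordinates and User-Agent string are placeholders, api.weather.gov is the real NWS API) of pulling a point forecast - the hard part described above is joining it with reservoir and water-table data that lives elsewhere:

```python
# Minimal sketch: resolve a lat/lon to its NWS forecast endpoint, then print
# the next few forecast periods. Coordinates and contact address are placeholders.
import requests

HEADERS = {"User-Agent": "weather-dashboard-sketch (you@example.com)"}  # NWS asks for an identifying UA
LAT, LON = 38.8977, -77.0365

# Step 1: the /points endpoint maps coordinates to a forecast grid URL.
point = requests.get(f"https://api.weather.gov/points/{LAT},{LON}", headers=HEADERS, timeout=10)
point.raise_for_status()
forecast_url = point.json()["properties"]["forecast"]

# Step 2: fetch the forecast periods for that grid.
forecast = requests.get(forecast_url, headers=HEADERS, timeout=10)
forecast.raise_for_status()
for period in forecast.json()["properties"]["periods"][:4]:
    print(period["name"], "-", period["shortForecast"])
```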
- a good LIMS (Laboratory Information Management System) that incorporates bioinformatics results. LIMS come from a pure lab, benchwork background, and rarely support the inclusion of bioinformatics analyses on the samples in the system. I have yet to see a lab that uses an off-the-shelf LIMS unmodified - they never do what they say they do. (And the number of labs still running on something built on age-old software is... horrific. I know one US lab running an abomination built on FileMaker Pro.)
- Software to manage grants: who is owed what, what the milestones are and when to send reminders, who's looking after what, who the contact persons are, due diligence on potential partners, etc. I worked for a grant-giving body and they came up with a weird mix of PowerBI and a pile of Excel sheets and PDFs.
- A thing that lets you catalogue Jupyter notebooks and Rstudio projects. I'm drowning in various projects from various data scientists and there's no nice way to centrally catalogue all those file lumps - 'there was this one function in this one project.... let's grep a bit' can be replaced by a central findable, searchable, taggable repository of data science projects.
> A thing that lets you catalogue Jupyter notebooks and Rstudio projects. I'm drowning in various projects from various data scientists and there's no nice way to centrally catalogue all those file lumps - 'there was this one function in this one project.... let's grep a bit' can be replaced by a central findable, searchable, taggable repository of data science projects.
Oh... oh my. This extends so far beyond data science for me and I am aching for this. I work in this weird intersection of agriculture/high-performance imaging/ML/aerospace. Among my colleagues we've got this huge volume of Excel sheets, Jupyter notebooks, random Python scripts and C++ micro-tools, and more I'm sure. The ones that "officially" became part of a project were assigned document numbers and archived appropriately (although they're still hard to find). The ones that were one-off analyses for all kinds of things are scattered among OneDrive folders, Zip files in OneDrive folders, random Git repos, and some, I'm sure, only exist on certain peoples' laptops.
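A minimal sketch of that cataloguing idea (Python; the directory layout and output format are assumptions, and RStudio projects would need their own parser): walk a tree, pull top-level function definitions out of .py files and .ipynb code cells, and build a flat, searchable index.

```python
import ast
import json
from pathlib import Path

def functions_in_source(source: str):
    """Names of top-level functions defined in a chunk of Python source."""
    try:
        tree = ast.parse(source)
    except SyntaxError:        # notebook magics etc. just get skipped
        return []
    return [node.name for node in tree.body if isinstance(node, ast.FunctionDef)]

def index_projects(root: str):
    """Map function name -> list of files defining it, across .py and .ipynb files."""
    index = {}
    for path in Path(root).rglob("*"):
        if path.suffix == ".py":
            sources = [path.read_text(errors="ignore")]
        elif path.suffix == ".ipynb":
            nb = json.loads(path.read_text(errors="ignore"))
            sources = ["".join(cell.get("source", [])) for cell in nb.get("cells", [])
                       if cell.get("cell_type") == "code"]
        else:
            continue
        for src in sources:
            for name in functions_in_source(src):
                index.setdefault(name, []).append(str(path))
    return index

if __name__ == "__main__":
    for name, files in sorted(index_projects("./projects").items()):
        print(name, "->", ", ".join(files))
```

Tagging and full-text search would still need a real store on top (even SQLite would do), but "there was this one function in this one project" becomes a lookup instead of a grep.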
Ha - I'm on the other side of the grant application process, and I used an LLM to make a tool to describe the project, track milestones and sub-contractors, generate a costed project plan, and generate all the other responses that need to be self-consistent in the grant application process.
Ditto for most academic biomedical research: we desperately need more high quality customized code. Instead we have either nothing or Python/R code written by a grad student or postdoc—code that dies a quick death.
> then I would count that as "replaced 2 programmers"
Well then you can count IDEs, static typing, debuggers, version control etc. as replacing programmers too. But I don't think any of those performance enhancers have really reduced the number of programmers needed.
In fact it's a well known paradox that making a job more efficient can increase the number of people doing that job. It's called the Jevons paradox (thanks ChatGPT - probably wouldn't have been able to find that with Google!)
Making people 20% more efficient is very different to entirely replacing them.
As a founder, I think that this viewpoint misses the reality of a fixed budget. If I can make my team of 8 as productive as 10 with LLMs then I will. But that doesn’t mean that without LLMs I could afford to hire 2 more engineers. And in fact if LLMs make my startup successful then it could create more jobs in the future.
Intent makes all the difference in the eyes of law.
If a jury of your peers decides that you configured that firmware to protect your data from thieves, then it is not a problem.
But if a jury of your peers doesn't buy your assertion of that, and instead becomes convinced that you applied that (otherwise reasonable and legitimate) feature with the intent to destroy evidence of a crime so that it doesn't reach the court, then setting it up was a felony.
Your IT system may need to handle entries linked to people who don't have names, for example, recording that some treatment was given to a baby who died soon after birth before being given a name.
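A tiny sketch of the data-modelling consequence (Python; the field names are illustrative, not from any real system): the schema has to treat the name as optional rather than required.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    patient_id: str                     # the stable identifier
    name: Optional[str] = None          # e.g. a newborn not yet named
    date_of_birth: Optional[str] = None

@dataclass
class TreatmentRecord:
    patient: Patient
    treatment: str

# A treatment recorded for an unnamed newborn is still a complete, valid record.
record = TreatmentRecord(Patient(patient_id="TMP-0001"), treatment="phototherapy")
```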
I don't get why I would choose a dataclass in cases where I've already decided that an ordinary tuple would be a better fit than a normal class (i.e. "anywhere you're tempted to use a namedtuple")
To me, namedtuples are a convenience that gives nicer syntax than ordinary tuples in scenarios where I don't want the overhead of storing a copy of all the keys with every object, like a dict would. A dataclass seems to be even more stuff on top of a class, which is effectively even more stuff on top of a dict, but all the use cases for namedtuples are ones where you want much less stuff than an ordinary class has. And I don't want to have to define a custom class, just as I often don't define a custom namedtuple in my code but use the one the database driver generates based on the query - a very common use case for namedtuples as efficient temporary storage of data that then gets processed into something else.
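For what it's worth, a minimal sketch of the trade-off being discussed (the DB-driver lines at the end are commented out and assume a standard DB-API cursor; everything else is illustrative):

```python
from collections import namedtuple
from dataclasses import dataclass
import sys

# namedtuple: field names live on the class, instances are plain tuples.
Row = namedtuple("Row", ["id", "name", "email"])
r = Row(1, "Ada", "ada@example.com")
print(r.name, r[1])      # attribute access and tuple indexing/unpacking both work
print(sys.getsizeof(r))  # tuple-sized; no per-instance __dict__

# dataclass: a normal class with generated __init__/__repr__/__eq__; by default
# each instance carries a __dict__ (slots=True removes it on Python 3.10+).
@dataclass
class RowDC:
    id: int
    name: str
    email: str

d = RowDC(1, "Ada", "ada@example.com")
print(d.name)            # attribute access only; not indexable or unpackable like a tuple
print(sys.getsizeof(d) + sys.getsizeof(d.__dict__))

# The database-driver pattern mentioned above: build the namedtuple from the
# query itself, so no hand-written class is needed.
# Row = namedtuple("Row", [col[0] for col in cursor.description])
# rows = [Row(*rec) for rec in cursor.fetchall()]
```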
Those farmers with second jobs would need protection from local megafarms who are outcompeting them, not other countries - it's not about location, it's about size of the operation.
The general principle is that "pipelines" impose a restriction where the errors of the first step get baked in and can't effectively use the knowledge of the following step(s) to fix them.
So if the first step isn't near-perfect (which ASR isn't), and if there is some information or "world knowledge" in the later step(s) that helps resolve those errors (which is true of knowledge about named entities with respect to ASR), then you can get better accuracy with an end-to-end system where you don't attempt to pick just one best option at the system boundary. Also, joint training can be helpful, but that IMHO is less important.
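A toy sketch of that idea (Python; all hypotheses, scores, and the entity list are made up): keep the ASR n-best list across the boundary and let a downstream scorer that knows about named entities participate in the final decision.

```python
# ASR n-best hypotheses with (made-up) ASR log-scores; the acoustically
# preferred 1-best happens to mangle a named entity.
asr_nbest = [
    ("meeting with anne harbour on friday", -1.2),
    ("meeting with ann arbor on friday", -1.4),
    ("meeting with an arbour on friday", -1.5),
]

KNOWN_ENTITIES = {"ann arbor"}  # downstream "world knowledge"

def downstream_logscore(text: str) -> float:
    """Stand-in for a downstream LM/NLU score that rewards known entities."""
    return 0.0 if any(entity in text for entity in KNOWN_ENTITIES) else -2.0

def joint_best(nbest, asr_weight=1.0, downstream_weight=1.0):
    """Pick the hypothesis with the best combined ASR + downstream score."""
    return max(nbest, key=lambda h: asr_weight * h[1] + downstream_weight * downstream_logscore(h[0]))[0]

pipeline_choice = max(asr_nbest, key=lambda h: h[1])[0]  # hard cut at the boundary: errors baked in
joint_choice = joint_best(asr_nbest)                     # downstream knowledge can overturn the ASR preference

print("pipeline:", pipeline_choice)   # "anne harbour" survives
print("joint   :", joint_choice)      # "ann arbor" wins once entity knowledge is applied
```

A real end-to-end system would propagate a lattice or probabilities rather than a short n-best list, and would typically be trained jointly, but the rescoring above is the minimal version of "don't commit to one best option at the system boundary".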
In this context money is just a unit of measurement. If we say that we need more of a particular kind of infrastructure and less of a particular kind of activity, etc., then the discussion requires being able to say how much of those (many!) things, and we can quantify all of them in terms of dollars.