More on infrastructure composition: software is an abstraction above that.
Is the unit of composition a rack (chunk), a server (smaller chunk), or a blade (smallest chunk)? In what I think of as classic systems architecture you've got a 'store' (storage), 'state' (memory), 'threads' (computation), and 'interconnect' (fabrics). In the '90s a lot of folks focused on fabrics (Cray, Connection Machine, Sun, etc.), somewhat on threads (compute blades), and state came along for the ride. How these systems were composed was always a big thing; then along came the first Beowulf clusters, which used off-the-shelf motherboards (a "chunk" of threads/state/store) with a generic fabric (Ethernet). Originally NASA showed that you could do highly parallel processing on these sorts of systems, and Larry and Sergey at Stanford applied it to internet search.
Collectively you have a 'system resource', and with software you can make it look like anything you want. When you do compute with it, its performance becomes a function of its systems balance and the demands of the workload. It's all computer-sciencey, and yes, there is a calculus to it. This isn't something most people dive into (or are even interested in[1]), but it was one of the things that captured my imagination early on as an engineer. I was consumed with questions like: what was the difference between a microprocessor, a mini-computer, a workstation, and a mainframe? Why does each exist? What does one do that the others can't? Things like that.
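A minimal sketch of that calculus, in the spirit of a roofline-style model (my illustration, not anything from the original text; all numbers are hypothetical): a workload's attainable throughput is capped by whichever resource runs out first, compute or memory bandwidth, and the crossover depends on how much arithmetic the workload does per byte it moves.

```python
# Roofline-style sketch of "systems balance" -- hypothetical numbers,
# chosen only to show the arithmetic, not to describe any real machine.

def attainable_gflops(peak_gflops: float, mem_bw_gbs: float,
                      flops_per_byte: float) -> float:
    """Throughput is capped by compute or by bandwidth * arithmetic intensity,
    whichever is lower."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# A hypothetical node: 500 GFLOP/s peak compute, 50 GB/s memory bandwidth.
# A workload doing 2 FLOPs per byte moved is bandwidth-limited here:
print(attainable_gflops(500, 50, 2))   # 100 GFLOP/s -- memory-bound
# A workload doing 20 FLOPs per byte is compute-limited:
print(attainable_gflops(500, 50, 20))  # 500 GFLOP/s -- compute-bound
```

The same arithmetic is one way to see why a mainframe, a workstation, and a micro could all coexist: each picked a different balance point for the workloads it was built to run.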
[1] At Google I worked in what they called 'Platforms' early on, and clearly most of the company didn't really care about the ins and outs of the systems bigtable/gfs/spanner/etc. ran on; they just wanted APIs to call. But they also didn't care about utilization or costs. By the time I left, some folks had just figured out (and one guy was building his career on) the fact that utilization directly affected operational costs. They still hadn't started thinking about non-uniform rack configurations for different workloads.
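The utilization-to-cost link is simple arithmetic (a hypothetical sketch of my own; the fleet numbers are made up): a fleet costs the same whether it's busy or idle, so the effective cost per *useful* core-hour is the fixed cost divided by the core-hours actually used.

```python
# Hypothetical fleet economics: fixed cost, so cost per useful core-hour
# scales inversely with utilization.

def cost_per_useful_core_hour(monthly_cost: float, cores: int,
                              hours: int, utilization: float) -> float:
    """Fixed monthly cost spread over the core-hours actually doing work."""
    return monthly_cost / (cores * hours * utilization)

# Made-up fleet: 10,000 cores, 720 hours/month, $100k/month all-in.
fleet = dict(monthly_cost=100_000.0, cores=10_000, hours=720)

# Tripling utilization from 25% to 75% cuts the effective cost per
# useful core-hour to a third -- with zero new hardware.
print(cost_per_useful_core_hour(**fleet, utilization=0.25))
print(cost_per_useful_core_hour(**fleet, utilization=0.75))
```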