This is how the industry has always worked. The 486SX was a 486DX with the FPU burned off with a laser. Initially this was to make use of parts with a defective FPU, but parts with perfectly good FPUs were often disabled to fulfil demand.
CPUs are produced to a single design within a processor series. The ones with no faults are clocked the fastest and have all their features enabled. Parts with lots of flaws are sold off as crippled budget processors. AMD's triple-core processors seem a bit strange until you realise that they're a quad-core part with one faulty core.
Overclockers have known about this for years, identifying countless 'good batches' of processors and GPUs that were perfectly capable of being clocked faster, but were sold as cheap parts to fulfil demand.
If this seems like a con, then the whole IC fabrication industry is a con - artificially crippling ICs has been standard practice for decades.
This is different. In order to be upgraded via software, the chip has to be able to function at the upgraded specs. So Intel can't take a broken batch and sell them in this type of machine anymore; they have to put the good one in no matter what.
That's what makes it seem arbitrary and annoying. When you buy a low-spec chip, you get it cheap because it could be defective. But when you buy this, it's perfectly functional, just artificially limited. You aren't helping Intel increase their yield and getting some savings for helping them -- you are just being fucked because they feel like fucking you.
I hated Intel for a long time, but then they started making good products and contributing drivers to Linux... but if they keep this sort of thing up, that goodwill is all going to erode and AMD will have the opportunity to get the enthusiast market again. (Their "we'll sue you for using the HDCP crack" stance is similarly goodwill-eroding, especially because there is nobody they can even sue.)
I'm genuinely curious - why do you find this to be a problem? Isn't the decision to upgrade (or not) completely up to you? How is this any different from enabling a feature set on a Cisco router with a license key, or adding users to an Exchange server? In both of those cases the systems are already capable of performing the function. Are you opposed to all of these classes of 'pay to enable' functionality?
I am not renting the privilege of using my computer from Intel; I purchased a piece of hardware they happen to produce. I own it now, and I shouldn't have to maintain a relationship with Intel any more than I should have to continue paying fees to Honda for the car I own outright.
Corporations are renting the privilege of using Exchange from Microsoft because they want a corporation to stand behind their email system. If they didn't want to maintain a relationship with a major corporation, they could just as easily have done it with open-source software.
Cisco is entitled to sell me more sophisticated software, but not to prevent me outright from using the hardware I bought the way I want to with my own software. And they don't. My company wants features that would be too expensive with a unified Cisco solution, so we run Cisco phones flashed with custom open-source firmware connected to a Trixbox.
This is just simple price discrimination. They could just add $50 to the price of each chip and then people who didn't want improved performance pay it anyway. This way those who want it pay for it, those who don't don't.
But this is no DRM-ful app store. The upgraded computer will run the same instruction set, just faster. The mechanism to disable the bigger cache until a code is entered inhibits nothing people would be doing with their Intel CPUs today. Perhaps the homebrew microcode scene suffers, but that's a niche.
From a technical perspective, you're absolutely correct. This is just the logical next step in what's been going on for 20 years.
But from a consumer's perspective, this is a big shift. It's taking what was once an open secret in the industry and shoving it right in the faces of consumers. And I wouldn't underestimate the backlash that can come from something like that.
It's still nothing new. IBM was doing this with mainframes in the '70s. The mainframes they installed typically had more horsepower than you were licensed for. When you needed more power, the IBM rep came over with an impressive suitcase of equipment, locked himself inside your server room, simply flipped a switch, and then ran some diagnostics for an hour or so to further the illusion.
So you're saying that IBM got their representatives to do more than necessary to cover up the fact that the upgrade was just the flip of a switch? That they were conscious of what their customers would think - business customers that are probably far more understanding of this type of arrangement than the average consumer?
The fact that we know it's been going on a while means nothing to the average consumer. Crippling of CPUs used to be magic that happened at the chip fabs that only people connected to the industry knew about. Now consumers are being openly told that their chips have been crippled in software, and if they pay more, that magical switch will be flipped.
I don't know if there'll be a big backlash or not, but this is not business as usual. Intel's small step forward has suddenly made the practice visible.
AFAIK mainframe business still works exactly like this.
And I see no problem with it.
It is good on two fronts.
1. If you're savvy you can hack the chip thus getting more for less (Premium on your ability).
2. If you're not savvy then you can buy exactly what you're after at a price point that is acceptable to you (thus ensuring a proper price/performance ratio).
People feel cheated since they didn't get more than what they paid for? I always assumed that trade happens when two parties reach a level where both "agree" that they get what they expected out of the deal.
There is no need to go all ape on Intel: IC manufacturing is a cutthroat business with insane risks and low margins. If Intel can profit from different people being willing to pay more for the "same" thing, good for them (and yes, for anyone who knows and understands Amdahl's law, it's perfectly clear that within a microchip generation the difference between high end and low end is mostly superficial with regard to system performance).
As recently as five years ago, IBM would do a mainframe deal sort of like this: You get X MIPS available, but only YY% of those MIPS will be turned on with the option of turning on additional computing power during the period of the deal. There was no pretense that the hardware wasn't already present.
I don't think the average consumer has a clear enough understanding of how a computer works that they would automatically view this in a negative light.
But is this really a problem? I can buy a quad-core CPU restricted to 1.33GHz on the cheap and 6 months down the line I can put down a little more cash to get it unlocked to a 2.00GHz.
Why is this a bad change? Currently if I buy a 1.33GHz and want to upgrade later to a more powerful one, I'm buying the second processor and dropping a lot more money than I'd be putting down to unlock more processing power of my existing CPU.
This would be like selling all Kindles with 3G built in. You can save say $10 off the activation of 3G if you buy it already activated, or you can wait and pay a little more in the long run, but less up front.
I don't see this as a bad thing, but I understand why people would misunderstand as to why it is.
If you put a restricted CPU in a budget computer, it's the same as putting a flawed, under-clocked CPU in a budget computer, save that you can upgrade it later for less money than replacing your CPU with the equivalent. (Edit: note that it also puts less strain on CPU manufacturers, as they're not producing two CPUs whenever someone buys a computer to upgrade, which in the long run would probably mean more quality control could be implemented in the production process, meaning fewer flawed CPUs.)
> I can buy a quad-core CPU restricted to 1.33GHz on the cheap and 6 months down the line I can put down a little more cash to get it unlocked to a 2.00GHz.
Actually, you're buying a 2.00GHz part on the super cheap; you just aren't allowed to use all of what you bought under this kind of payment scheme. That brings up two downsides:
1) Consumers know that the thing they purchased is not really under their control... many people want to be able to do whatever they want with what they purchased. This is basically the same as a pay-per-use license like with software today... which most people absolutely hate.
2) It tells the market that the 2.00GHz part is way overpriced. It's a powerful signal of how bad a value the company is offering, since you actually can buy the 2.00GHz part at the lower price; it's just artificially broken. How long till crackers figure out their way past such a lockout scheme?
This isn't how pricing works. Prices track value (in the abstract) and the market (in reality); unless the good we're discussing is sorghum, prices don't track costs.
If pricing did work this way, Photoshop Elements would have cost the same as Photoshop.
> Prices track value (in the abstract) and the market (in reality)
I think that's what I said. In the abstract, consumers will feel like they are getting ripped off by paying for the fully unlocked part knowing that the exact same thing is available at a cheaper price -- part of the consumer's mindset will be "we know chip manufacturer A has to make a profit, but they're clearly making a profit on the cheaper part or they wouldn't sell it; the premium part then is just a con job and manufacturer A is just getting greedy."
This reduces the desirability of the premium part. It'd be like selling a 6-cylinder car and a 4-cylinder car, only the 4-cylinder is actually a 6-cylinder with two of the pistons turned off (and can be enabled by paying some premium and having the other two pistons turned back on -- don't think something like this hasn't already been thought of in the car industry). People will just buy the 4-banger and take it to their local ricer shop and have the other two turned on and a type-R sticker put on the back.
In reality, it'll reduce the price of the premium part as demand shifts to the cheaper (but exactly the same) alternative OR, if the price of the premium part drops so low as to be extremely close or at parity with the slower part, people will stop buying the lower priced part. Either way, in reality, they should have just sold one part and not wasted everybody's time.
I don't work in the industry, but I'd imagine some of the cheaper budget chips don't make profit on their own--they require the more expensive non-defective chips to break even.
It's even possible for the lower-priced chips to make a loss, as long as their volume is sufficient to pay for a large share of the development costs that make the high-margin chips possible.
If I may ask, why is sorghum your standard example? Just because the word "sorghum" sounds so sweet (and I agree), or did you have a transformative sorghum experience as a child?
Reminds me of a 100MHz AMD DX4 I had, circa 1996. I was handed a signal 11 whenever I tried to compile a certain C program.
After going over every inch of the source code and finding nothing wrong, I'd almost given up when I discovered the Sig 11 FAQ (http://www.bitwizard.nl/sig11/). After clocking the CPU down to 90MHz, no more crashes. It turned out my gray-market dealer had sold me an overclocked 90.
Is the fact that Intel is testing the market for this a sign of fabrication processes having improved to the point where yields have outgrown the binning process?
Then again, here's a thought: This test marketing is being done with ancient-architecture Pentium chips. Perhaps this is a consequence of fabricating these chips with less-dense architectures on equipment built for the tolerances required by the latest generation. I don't know enough about the process to say for sure, but I could imagine this resulting in a significant improvement in yields compared to the Pentium's heyday (or to Core i7 chips today).
These chips are not "ancient architecture" Pentiums. It's because Intel branding is fucking insane. For example, the Core i3, i5, and i7 names are meaningless, except to vaguely suggest increasing performance: some low-power-draw i5 chips are put into laptops and given an i7 label.
As for this CPU, Intel recently revived the Pentium name for a low-performance chip positioned about midway between the Celeron line and the Core i3 line.
Well, the simple answer is that it is a con. Disabling hardware capabilities that have already been confirmed by QA is nothing less than an attempt to subvert the laws of supply and demand. The chipmakers are artificially restricting the supply of high-end parts in order to maintain the pricing structure they want. They can only get away with it because they are a duopoly, and Moore's Law means that the market forces that could eliminate this practice can't ever catch up.
Don't get me wrong- I get why people are mad. It comes off as racketeering. And I certainly wouldn't want to drop $50 on this. But how is this all that different from software companies that do the same thing?
Your basic subscription to Basecamp, your Photoshop trial, your Windows 7 Home Edition, the Keynote trial that comes with your Mac, your iPhone... they're all capable of doing a lot more (for no extra cost to the company behind it), and you can upgrade for a cost. Another good example that the article mentions is upgrades in video games. This is a pretty common practice for software, so why not hardware?
I don't understand why people are mad, but I do understand why they'd be surprised. The value of the processor is physical. It's protected by property law. At one point in the manufacturing process, the processor is purposely damaged and devalued to meet market demand. The force of the market is to take goods, damage them, and sell them as inferior goods. That's not what you would expect an otherwise "reasonably functioning" manufacturing economy to incentivize. Taken out of context, some Chinese schoolchild might be learning: "In the great American Capitalist system, goods are damaged on purpose to fulfill the optimum of their system: farmers are paid not to plant anything, and chip manufacturers damage their own goods on purpose. Why do they do this? Because the rules of American Capitalism dictate it."
Students in intro economics courses (in American universities) already are taught this. Most microeconomics courses will have a lesson about how during the Depression, farmers were paid to destroy crops in the field, to reduce the supply of them and raise the price of commodities. Or how today, farmers are incentivized to convert their crops to non-productive uses like biodiesel to keep the price high.
The formation of Standard Oil in the 1880s was a reaction to ruinous overproduction of oil, and its effect was to control and restrict the flow of oil onto the market to maintain prices. Same with the OPEC cartel today.
That people don't already know this says more about the educational system than the economic system. Then again, the educational system is itself a monopoly that seeks to restrict the flow of information so that it keeps itself relevant.
The market demand is for a cheaper processor. That the vendor is able to sell high-end parts as low-end ones (voluntarily damaging them in the process...) and still turn a profit is a sign that they are artificially inflating prices with this trick and others. In a true free/perfect/unbiased/whatever market this kind of economically evil behavior (from the whole world/humankind point of view) would soon disappear thanks to the great idealized market, but there is no such thing in the real world, especially not in the processor field, and especially not in the x86 processor field.
So the little Chinese kid will indeed get a kind of wiseness by learning that "in the great American Capitalist system, goods are damaged on purpose to fulfill the optimum of their system: [...] chip manufacturers damage their own goods on purpose. Why do they do this? Because the rules of American Capitalism dictate it."
Because for the most part, that's the most unbiased view you can get.
(But the rules of American Capitalism apply more and more to China too, so the little Chinese kid will either not be taught that, or will learn soon enough that he is part of the very same system regardless... (Not saying that real attempts to organize society by communism were better anyway))
"artificially increasing prices" - if the market, which has heavy competition, will bear the price increase, then there is nothing artificial about it. Prices are not a fixed amount; Intel is free to sell every single chip at a different price if the market will bear it.
Think of the logic. You want to produce a low end and a high end chip - because that's what the market is eating up. It's cheaper to have one fabrication process rather than two.... and suddenly, instead of busted, inferior chips being sold as lower end chips, we have the same chips in the same market niche, with the possibility of a simple software update to enable the high end features. It might smell funny at first, but as long as they are up-front about it, it's just business.
Do you honestly think that Intel is subject to "heavy competition" in the desktop CPU market?
They have exactly one competitor (AMD), and because Intel is so much bigger and has so much more money, they always have newer and better fabs. The only ways AMD can be competitive with Intel are to be really daringly clever with their chip design (Athlon 64, and hopefully the upcoming Bobcat and Bulldozer cores), sell their chips at lower profit margins, take advantage of Intel's mistakes (Itanic vs Athlon 64), and to pick their battles wisely (AMD has never been able to field a whole product lineup that is competitive across the board). The barriers of entry to that market are so high that even AMD can't fully surmount them, and AMD definitely can't gain ground or even maintain solvency by doing what Intel does but slightly better.
But it doesn't cost Intel more to produce the high-end chip. In fact, maintaining two fabrication lines will cost more than producing a single chip and disabling the premium features. So I don't think "artificially increasing prices" applies here.
In conclusion, from the manufacturer point of view, there is more profit to be made by selling a product at multiple profit margin levels using the same manufacturing process.
But at the same time, doing only low profit margin manufacturing might be: or not profitable; or not enough to drive and push development of new technology.
If you buy a crippled processor and hack it, I'd guess that Intel has no cause to sue you, precisely because you own that processor. But they don't have to make it easy to hack.
Just because it has a restriction built in doesn't necessarily mean it comes with an EULA, or that finding a way to enable or otherwise modify the hardware to your whim is illegal in any way (although I'm sure Intel would take a run at the first grey-market unlocks)
As long as they are up front about the cost & features of the product I'm buying I don't care. As a consumer, I don't like this practice, it smells bad - but it's supply and demand. Intel can price their chips however they want, it's up to them to price them in a way that's the most profitable for them - it's not like they have no competition.
Modern processors could be considered software as much as they're hardware.
Further, it delivers what the customer expects. It just turns out that Intel nicely provided additional functionality that you can unlock if you pay for your part of the R&D cost of it (which is the real cost of processors).
For all we know, every modern appliance is counting down the days until the warranty expires at which point it self immolates, forcing you to buy another.
That isn't the case in this situation, though. The expectations of the device were entirely up front and this was more of an "aha!".
As an aside, my auto has a built in system that monitors car telemetrics, has GPS functionality, a built in cell phone and "lojack" system, etc. I paid for this as a part of the car. It is completely useless the moment I stop paying a monthly service fee.
Short excerpt:
Even at speeds of up to 40 MPH on the runway, the attack packets had their intended effect, whether it was honking the horn, killing the engine, preventing the car from restarting, or blasting the heat. Most dramatic were the effects of Device Control packets to the Electronic Brake Control Module (EBCM) — the full effect of which we had previously not been able to observe. In particular, we were able to release the brakes and actually prevent our driver from braking; no amount of pressure on the brake pedal was able to activate the brakes. Even though we expected this effect, reversed it quickly, and had a safety mechanism in place, it was still a frightening experience for our driver. With another packet, we were able to instantaneously lock the brakes unevenly; this could have been dangerous at higher speeds.
This is quite normal in high-end kit; If you buy an HP SuperDome for example with half the CPUs they'll probably actually ship you a fully loaded one with half the chips turned off (still for the price of exactly what you ordered). Then when you want to upgrade you don't have to take any downtime, you just pay them the difference and they supply you with the codes to unlock it. IBM do this with mainframes all the time.
It seems a little clearer in that case, because mainframe purchases are often pretty explicitly "licenses" rather than "purchases", with an actual contract negotiated between the two companies. IBM used to even come and take back the machine if you stopped paying the annual fees, because you didn't really own it. Consumer hardware is more typically a "sale", though, with no real contract-negotiation phase, and both parties' obligations completed after the sale, except for any warranties.
Couldn't someone theoretically figure out how to enable these features without paying intel? I suspect if intel starts doing this people are going to figure out ways around this.
Practically guaranteed someone will, in no time flat, figure out how to unlock these, and that knowledge will be spread far and wide. There will be a run on the stores for the cheap versions of the chips for this reason, mostly for the home/small-shop business, and then Intel won't do it anymore.
This actually sounds pretty good to me, at least in theory.
Upgrading the CPU is quite costly - you have to take out the old CPU and replace it altogether with a new one. With HDDs you can put the old one in an external enclosure and get some use from it (if the reason you're replacing it isn't failure), and with RAM you might have slots free, so you can keep using the old stuff in addition to the new.
Replaced CPUs tend to be pretty useless. The socket type changes all the time, so you have to eBay it off. Laptops are worse, the CPUs are often soldered on or you can't find replacements.
The only real problem I foresee is that the upgrades are likely going to be of limited use. Hyperthreading gives you a modest boost, anything more than that (clock speed, disabled cores, cache) will drive up power consumption and therefore thermal dissipation. Computers upgradeable in this way will have to contain cooling systems that can cope with the extra heat, even if 90% of people will never upgrade. Might still be feasible for the very low end (Atoms - especially as they're so slow even "normal" people might upgrade) or as CPUs become increasingly modular (either double your CPU or GPU cores in the same package, but not both).
I ♥ AMD when it comes to pricing. Their pricing ratios seem to reflect the realities of production yields rather than market segmentation, and seem more closely correlated with performance. (Desktops; haven't looked at server CPUs lately).
This already happens with cars. For example, the difference between the BMW Mini One and Cooper versions is basically just a different engine map on the CPU, with the former restricted in power and the latter sold at a higher price http://www.newmini.org/content/mini_jan02.htm.
Deleting a certain gene in mice can make them smarter by unlocking a mysterious region of the brain considered to be relatively inflexible, scientists at Emory University School of Medicine have found.
We already do this in several industries, so it's nothing to get that upset about. However my larger worry here is that the end-users will now think that software can upgrade their hardware. That alone opens up huge doors in security - users will inadvertently download some tracking program that claims to "upgrade their processor" under the assumption that it does the same thing that intel is doing. Good for computer repair companies and spyware companies, bad for the end user.
* Nvidia intentionally capping floating point performance to 1/8th on their GTX 480 cards, since they cost a fraction of their Tesla cards. Both are based on the latest Fermi architecture.
* In automotive industry: two otherwise identical cars, but with different versions of engine microcontroller driver, can have many thousands of dollars difference in price.
well, that risk is exactly why they're testing the idea out in low-end markets first - so the people who would break the security for this probably don't even care for this laptop anyways.
They'd probably argue you're circumventing the copyright protections on the portion of the chip you didn't pay to access, just as circumventing DRM for pre-installed DLC for a game would be a DMCA violation. Anybody's guess what the courts would say about that, that is to say, I'm not saying I agree with this argument, just that I think it would be good enough to get someone selling $25 unlocks into court with a decent chance of serious losses.
Since when do you make a copy of those parts of the chip to use them?
Copyright is about the right to restrict copying - the clue is in the name. The software case is fundamentally different, because your computer must make a (temporary) copy of the software in order to use it.
You don't make a copy, you have a copy. The DMCA applies to such works as well, at least as far as the law is concerned. This is part of why the DMCA is so offensive, the "anticircumvention" clause grotesquely expanded copyright law well beyond the act of mere copying. The act of circumvention becomes illegal, regardless of whether it even involves you "making a copy" in the eyes of the law. The act of playing a DVD on Linux by hacking past the encryption is illegal, even though that is not usually considered "making a copy" in the eyes of the law. (That's also something that has been fought over but I'm not convinced has been settled.)
The DMCA bans circumventing effective technological measures intended to prevent copying. The technological measure you'd be circumventing here is one intended to prevent you from using a feature of the hardware, not prevent you from copying a copyrighted work. The DMCA does not appear to apply.
(On the other hand, if they designed it such that you needed to upload a piece of (copyrighted) microcode to the CPU on each boot, then that could well bring it within the remit of copyright law. In that case, if you wanted to produce a third-party version, you'd have to write your own "clean room" version of the necessary microcode, which seems like a pretty high hurdle).
Did you actually look at the document I linked to? It explicitly lists the kinds of works the DMCA applies to. And none of the things listed says "CPU features" or anything that could be interpreted to mean that by any stretch of the imagination.
Oh, but it can. Imagination can stretch pretty damn far. All I'm saying is that I can "stretch" far enough to get you into court without getting your case thrown out immediately, and that's enough to chill anybody considering trying this gambit.
Yup - which is why the unlock will just end up floating around out there for free, and the concept will just fall by the wayside as not market-worthy. Everyone and their dog will buy cheap chips and crank up the volume.
Yeah, don't try this in the USA. Fortunately it's just software, so you can sell it from anywhere as long as the USA doesn't block your advertising or payment processing.
DEC used to do this with their hardware, they would cripple an otherwise perfectly functional machine to a cheaper model in order to segment the market.
Of course DEC field engineers did not feel like waiting for the diagnostics to complete so they usually temporarily upgraded the machine to full spec to run their tools, then revert the changes before they left.
This was a funny little dance, because some of their customers had clued in to the trick and would do the same thing after the engineers had left: upgrading the machine, only to downgrade it just before a field engineer would arrive.
On the plus side, if this is a software thing I fully expect it to be hacked.
My question is, how do you unlock/lock things on the processor via software? Does the CPU have a special instruction that the software triggers? I can't wrap my mind around it.
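Nobody outside Intel has published the exact mechanism, but conceptually it's something like a signed unlock token that the firmware verifies before flipping feature bits. Here's a toy Python sketch of that idea; everything in it (the secret, the feature constants, the function names) is invented for illustration, and a real scheme would use public-key signatures checked in hardware or firmware rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical sketch: the chip ships with premium features masked off,
# and "firmware" only widens the mask when shown a token signed with a
# secret it shares with the vendor.

VENDOR_SECRET = b"vendor-signing-key"  # imagine this burned in at the fab
FEATURE_HYPERTHREADING = 0x1
FEATURE_EXTRA_CACHE = 0x2

def issue_unlock_token(serial: str, features: int) -> bytes:
    """What the vendor's upgrade server would send back after you pay."""
    msg = f"{serial}:{features}".encode()
    return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).digest()

def apply_unlock(serial: str, features: int, token: bytes, mask: int) -> int:
    """What the firmware would do: verify the token, then enable the bits."""
    expected = issue_unlock_token(serial, features)
    if not hmac.compare_digest(expected, token):
        raise ValueError("invalid unlock token")
    return mask | features

# A chip that boots with everything disabled...
mask = 0x0
token = issue_unlock_token("CPU-12345", FEATURE_EXTRA_CACHE)
mask = apply_unlock("CPU-12345", FEATURE_EXTRA_CACHE, token, mask)
print(mask & FEATURE_EXTRA_CACHE)  # prints 2: extra cache now enabled
```

The security of the whole business model then rests on that verification step being hard to bypass, which is exactly why people expect crackers to go after it.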
It seems most of the commenters here are overestimating the average consumer.
Excluding the friends I have in the tech industry, not a single one of my friends or family would be able to tell you the difference between the CPU and the software on their computer.
Besides, which is a better value proposition to the consumer (even if it is not a better value in reality)? Paying $50 for an online CPU upgrade that makes their processor appear 10% faster, or dropping $500+ on a brand new computer to get 40% faster?
IBM and Sun Microsystems (one of the first workstation vendors, then seller of servers, especially to finance; now an obscure division of Oracle, a commercial database company) used to do something similar with their biggest machines -- ship a fully populated high-end server, and then sell activation of CPU and memory resources after the fact. I don't know how this was enforced; I think it was mainly done at the OS or firmware level, not at the CPU level.
Intel has been doing this for years with 100% flawless chips. Crippling a chip is part of their business. Their yield management is fantastic such that they run entire wafers at the higher spec then choose what % to sell as lower-end SKUs purely based on demand. Perhaps 10 years ago this was driven by yield quality, but nowadays the number of chips out of a batch that can't pass the test for the top-of-the-line product in that batch are < 1%.
This seems like a great idea to me - sell the slower processor cheaply to the customer with the promise that they can upgrade it later via a simple code with no hardware changes necessary. The customer can get a performance boost later down the line when they have the need/money for it.
Intel gets the opportunity to make a little more money and the customer has a little future proofing - like I said seems like a great idea to me....
Funny related sidenote: the Chinese government was at one point so paranoid about buying processors from western companies that they initiated the creation of their own CPU architecture and began fabrication. They had a fear that outside CPUs could be programmed to be disabled (either remotely or under certain scenarios).
In any case, the processors were reportedly quite bad and the whole thing was eventually dropped.
Well, if it were SaaS (software as a service), no one would have complained about it, right?
With every paid service, you get a list of features for a fixed price, and then pay more to "unlock" features that the very same software can already perform.
I wonder why it is easier for people to pay extra for extras in software but not in hardware.
Price differentiation between customer groups. It is no different conceptually from my downloadable software, which is free or $30 depending on whether you want a code that activates an if statement in the bitwise-identical executable.
I could charge people $5 for the download and $25 for the code. Not a great idea, I think, but straightforward.
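The "if statement in the bitwise-identical executable" model can be sketched in a few lines. This is purely illustrative - the secret key, the function names, and the gated feature are all made up, not taken from any real product:

```python
import hmac
import hashlib

SECRET = b"vendor-secret"  # hypothetical signing key known only to the vendor


def make_code(customer_id: str) -> str:
    """What the vendor emails you after the $25 payment."""
    return hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()[:16]


def pro_enabled(customer_id: str, code: str) -> bool:
    """The single 'if statement' guarding the paid tier."""
    return hmac.compare_digest(make_code(customer_id), code)


def export_report(customer_id: str, code: str) -> str:
    # The binary ships with the full feature either way; the code
    # only decides whether this branch is taken.
    if pro_enabled(customer_id, code):
        return "full report"
    return "upgrade to unlock"
```

Everyone downloads the same bytes; paying customers just get a string that flips the branch - which is essentially what an upgrade card for a CPU does in silicon.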
In a well-functioning market every vendor would be frantically scrambling to improve value minus cost, because the delta between the two is the upper bound on their profit. But being in a market with very few competitors and ridiculously high barriers to entry gives Intel the luxury of incurring minor costs to decrease the value they offer, without seeing anyone immediately undercut them on it. If you could run a chip fab in your garage, you wouldn't see price discrimination ploys succeeding.
It's like region locked DVD players. It's all the same kit off the same assembly line, just with one jumper lead attached. If you're Sony or whoever you want to be able to shift your inventory from country to country if that's where the demand is.
Any good business seeks to extract the maximum amount people are willing to pay. A single price for all buyers works against that. (The economics term for it is price discrimination.) It's why coupons and discounts for senior citizens exist. Senior citizens and people willing to spend time clipping coupons often can't afford to pay regular prices, so businesses offer discounts where they still make a profit. Some profit is better than none. I'm sure Intel would love to produce high-end chips for people willing to pay, and low-end chips for others. But the cost of designing and building two separate but similar products would probably be much higher than simply crippling high-end chips. The alternative is that they only produce high-end (or low-end) chips at normal prices, driving low-end (high-end) customers to competitors, or they lower their high-end price and try to make up the difference in volume.
It does seem a little dirty to sell something to somebody, but restrict how they can use it. But, Intel isn't just selling a chunk of silicon. They are selling the work that went into shaping that silicon just so. They could have designed it much more cheaply if it didn't need to do so much. So, by buying the low-end chip, you aren't paying them for the extra effort they put in for their high-end customers. If you want the extra benefit, then you must pay for the extra work.
Most of the cost of the chip is in the R&D, just like in software, so the cheap chips cost about the same to manufacture as the expensive ones. For years, people have been overclocking cheap chips to make them run like the top-end ones, which shows there is little difference between them. Therefore price differentiation comes into play so they can maximise their profit. It's similar to what Microsoft do with the different editions of Windows.
The _business model_ makes sense, because the sale of these unlocks are practically all profit, and thus they can sell at multiple price points. It's just not right in a moral sense to prey on uneducated consumers like this.
If I bought a netbook that had an Atom 450 in it, and I knew what the processor was and what it was capable of, and then I took delivery and found out later that it really is an Atom 450+ that can morph into a Xeon 7500 with a payment, there's no loss or detriment to me to not take the option. If anything it's a convenience.
Actually you bring up a good point: it's much better for the consumer here, because mobile machines are so hard to upgrade. Joe Consumer can buy the cheap laptop today and then pay an upgrade fee later to simply bump the processor up without having to buy a whole new machine. Sounds like a good deal for him.
Intel develops a CPU, looks at the market, and sees that their chip is competitive at the $150 price point. So they go and sell it at that price. But the production cost of a CPU is probably closer to $10. And because there is a market of people who do not want to pay $150 for a CPU, Intel cripples some features of the CPU and sells it for $100. This way those who are willing to pay more pay more, and those who are not pay less.
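The arithmetic behind this is worth spelling out. Using the hypothetical figures from the comment above ($10 unit cost, $150 full price, $100 crippled price) and assumed, made-up segment sizes:

```python
# Hypothetical figures: $10 marginal unit cost, $150 high-end price,
# $100 crippled-SKU price; segment sizes are assumed for illustration.
unit_cost = 10
price_high, price_low = 150, 100
buyers_high, buyers_low = 1000, 1000

# Single price: only buyers willing to pay $150 actually buy.
profit_single = (price_high - unit_cost) * buyers_high

# Segmented: the same die also captures the $100 buyers' margin.
profit_segmented = ((price_high - unit_cost) * buyers_high
                    + (price_low - unit_cost) * buyers_low)
```

Since the crippled SKU comes off the same wafer at essentially the same cost, every low-end sale is almost pure additional margin - which is why the practice persists.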