"On Monday, Intel representatives confirmed reports in Russian-language newspapers that the American chip giant had hired approximately 500 engineers and related staff from the Elbrus Moscow Center of Sparc Technology, a state-sponsored design house in Russia. Some of the engineers will be hired away from Unipro, a related company. The new hires include Boris Babayan, Alexander Kim and Ivan Bolozov, said to be the architects of the E2K processor, a failed “Itanium-killer”."
"Russian designer could have been inspiration for Pentium name" - that's actually fake news; Pentkovsky joined Intel several years after the start of Pentium R&D.
I've heard that Pentium was named Pentium really late into its dev cycle, once the trademark office told Intel that trademarking model numbers wasn't going to be kosher.
It's in Russian, but it shows how a CPU that is underpowered (on paper!) can run a resource-hungry game with no lag.
So let's get back to Intel's issues. About 20 years ago Intel had a huge project that failed: Itanium, based on a VLIW architecture. Then Intel started pushing the P4 (NetBurst) architecture to the maximum. I think they hit a ~3 GHz barrier and there was no path beyond that.
And then the Pentium M was born (from the Israeli team), and soon Core Duo and Core 2 Duo.
> compile in 128-bit secure addressing mode with hardware access control to objects.
> In this mode, a pointer to data and functions takes 128 bits. It contains the 64-bit address of the object, its size (no more than 4 GB) and the position of the pointer inside the object. The mode enhances program memory control during execution.
Like Intel's unused bounds-checking additions, it checks the starting offset and the ending offset, but without the HW hash table.
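To make the quoted 128-bit mode concrete, here is a toy sketch of such a descriptor-style pointer with a bounds check on every dereference. The field layout (64-bit base, size, current offset) follows the quote above, but the exact encoding, the `SecurePointer` name, and the trap behavior are assumptions for illustration; MCST's actual format is not publicly documented.

```python
from dataclasses import dataclass

# Hypothetical model of an E2K-style "secure pointer": instead of a raw
# address, the pointer carries the object's base, its size, and the
# current offset inside the object, and hardware checks every access.
@dataclass
class SecurePointer:
    base: int    # 64-bit address of the object
    size: int    # object size in bytes (no more than 4 GB)
    offset: int  # position of the pointer inside the object

    def deref(self, memory, nbytes=1):
        # The access must lie entirely inside [base, base + size);
        # otherwise the hardware would trap instead of reading.
        if not (0 <= self.offset and self.offset + nbytes <= self.size):
            raise MemoryError("bounds violation trapped by hardware")
        addr = self.base + self.offset
        return memory[addr:addr + nbytes]

mem = bytearray(range(16))
p = SecurePointer(base=4, size=8, offset=0)
p.deref(mem, 4)          # OK: access lies within [4, 12)
p.offset = 6
try:
    p.deref(mem, 4)      # 6 + 4 > 8: out of bounds, trapped
except MemoryError:
    pass
```

This also hints at why porting is painful: any C idiom that round-trips a pointer through an integer or fabricates one with arithmetic ("pointer magic") has no valid descriptor and simply can't work in this mode.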
There was a video on YouTube (in Russian) where a lead developer from MCST confessed that they were unable to use this technology in practice. The entire software stack would have to be ported to strictly eliminate any pointer magic everywhere. They poured years into this and only succeeded in porting libc and a few small base libraries.
There seem to be very few technical details available on the E2K processor's instruction set. Although Linux has been ported to run natively on it, the E2K Linux distribution and sources don't appear to be publicly available.
Some people suggest this is security-through-obscurity connected with E2K's use in Russian military systems. I wonder if that is true.
The other issue, from what I understand, is that the only C compiler for E2K is proprietary [1] (it uses the EDG frontend). There is no support for E2K in GCC or LLVM. This also means their Linux port can't be upstreamed, since it can't be compiled with GCC.
That was well worth the read. Looks like the pipeline isn't exposed, which is what I'd expect for a VLIW in this role. They've got some support for speculative loads, but it seems pretty limited, so I'm not surprised they're having trouble adapting to general-purpose code.
I'm not sure I understand what you mean by the pipeline being exposed. No processor I'm aware of other than the i860 exposed its pipeline, so I don't know what you mean by that here. Can you elaborate?
Let's say that a load from the L1 data cache takes two cycles. Take the following instruction sequence.
r1 = 3
r1 = load(r2) // will return 8
r3 = r1 + 2
In a typical processor the system will stall or reorder things so that the load completes before the third instruction, so the final result in r3 is 10. In an exposed-pipeline model the assignment to r3 will happen before the load instruction returns, so the final value of r3 will be 5. I don't think this is very common outside VLIW systems, but some famous ones have used it there, like the Transmeta Crusoe. And the Mill guys are hoping to use it too, basically, though they're calling it phasing and it's not exactly the same thing.
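The two behaviors above can be simulated in a few lines. This is only a sketch of the semantics, not of any real microarchitecture: the interlocked version stalls until the load completes, while the exposed-pipeline version lets the add execute in the load's delay slot and read the stale value of r1.

```python
# Toy model of the 3-instruction sequence above with a 2-cycle load.

def interlocked(mem_value):
    r1 = 3
    r1 = mem_value      # pipeline stalls until the load completes
    r3 = r1 + 2         # add sees the loaded value
    return r3

def exposed(mem_value):
    r1 = 3
    pending = mem_value # load issued; result not visible for 2 cycles
    r3 = r1 + 2         # executes in the load shadow: sees the OLD r1
    r1 = pending        # load result finally lands in r1
    return r3

assert interlocked(8) == 10  # stalling pipeline: r3 = 8 + 2
assert exposed(8) == 5       # exposed pipeline: r3 = 3 + 2
```

The compiler (or code-morphing layer) is then responsible for never placing a dependent instruction inside the load's latency window unless it actually wants the old value.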
The floating point results are suspiciously good compared to the rest of the results. I wonder if they have some good VLIW instructions for floating point or it’s just an error in the spreadsheet.
Linpack is a floating-point benchmark focused on linear algebra operations. Coremark has some floating-point-heavy benchmarks in its suite, although a common criticism is that the footprint is too small to exercise the memory system.
So yeah, the "Maximum MFLOPS" row looks suspicious.
For HPC it is a huge handicap, though. Intel's 10nm process offers about 10 to 12 times higher transistor density than their 32nm process, for example. So, if you want to be fair, you should take this into account when comparing chips.
Just from a "proliferation and evaluation of technology" point of view, I find it nice that VLIW is getting a bit of a refresh (as an application processor, as opposed to GPUs or DSPs.) It's been quite some time since the Itanic sunk, and compilers have evolved... even if it doesn't go far, this is a nice opportunity to evaluate how well this can do in 2020.
We had Nvidia keeping Transmeta's lineage alive in their automotive SoCs until quite recently, but now they've switched over to Arm's latest, it seems. Too bad; the combination of code-morphing and an exposed-pipeline VLIW design seemed like it could have been a good one, though scheduling around variable-delay memory accesses remains a challenge.
For those of you in Moscow, if you go to the Yandex Museum you can play with an Elbrus box. It's really neat! Pretty fast, and has an entire Linux environment running on it.
I'm surprised they aren't just going with Arm-designed chips and moving to a Linux system. This would seem to give them scale, pricing, and availability/compatibility, and if they support open source with financial contributions, a really strong level of application security out of the box.
State security is important, but corporate security is critical as a first line of defense, because without those big successful companies and their contributions in taxes, R&D, and tech support, the state loses flexibility and top talent. Then the state loses ground in preserving its independence.
Elbrus is a more interesting architecture and presents many benefits over existing architectures; allowing it to be lost would be bad for both technical and political reasons.
I don't know about the political aspect, but I don't think Elbrus is interesting technically. Frankly, it's just another dead-end VLIW architecture. VLIW just isn't interesting or even relevant for general-purpose computing, or even high-performance computing. The DSP market is the only niche where VLIW has found meaningful usage, and even there the advantage is not huge by any means.
I have worked with MCST people (though that was a very long time ago at this point). My view on Elbrus is that it's kept alive by politics rather than technical merit.
The core challenge of modern high-performance computing is memory/cache latency - i.e., whichever architecture can generate the largest number of outstanding cache misses, at all levels of the cache hierarchy, as quickly as possible will perform best.
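A back-of-the-envelope model shows why the number of outstanding misses dominates. The miss latency and the degree of memory-level parallelism below are made-up round numbers for illustration, not figures for any real chip:

```python
import math

MISS_LATENCY = 100     # cycles per cache miss (assumed)
MAX_OUTSTANDING = 10   # misses the core can keep in flight (assumed)
N_MISSES = 1000

# Dependent chain (e.g. pointer chasing): each miss must complete
# before the next address is known, so the latencies serialize.
dependent_cycles = N_MISSES * MISS_LATENCY

# Independent misses: up to MAX_OUTSTANDING overlap, so total time is
# bounded by memory-level parallelism rather than raw latency.
independent_cycles = math.ceil(N_MISSES / MAX_OUTSTANDING) * MISS_LATENCY

assert dependent_cycles == 100_000
assert independent_cycles == 10_000   # 10x faster with 10-way MLP
```

An out-of-order superscalar discovers this parallelism dynamically at runtime; a VLIW has to have it proven and scheduled by the compiler, which is exactly where it struggles on irregular general-purpose code.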
Between superscalar and SIMT (and lots of SIMD), VLIW has no design space left for high-performance computing, as superscalar and SIMT are simply more flexible (superscalar is better for single-thread performance, and SIMT for highly parallel streaming workloads). SIMD also didn't help, since it's available for both SIMT and superscalar, negating part of the VLIW advantage.
Case in point: GPUs are one area where the workload is better suited to VLIW. Yet AMD moved away from VLIW; their newer architectures are not VLIW. Nvidia has been SIMT for a long time.
The niche where VLIW still has some value is DSP, where the overhead of extra die space for superscalar becomes significant and the workload is predictable.
You needn't use your real name of course, but for HN to be a community, users need some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
There are a large number of companies that make the equipment for the fabs. For example, Applied Materials and Lam Research in the Bay Area also make a wide variety of machines. ASML mainly makes the lithography equipment, which turns out to be quite expensive. It's like each vendor provides some of the equipment in the kitchen and TSMC is the chef.
I don’t know how to make a semiconductor, but does that mean Intel couldn’t make a 7nm chip even if you gave them access to TSMC’s fabs, because they don’t know how to design a 7nm chip?
Although it's maybe complicated by the fact that Intel has active development on 7nm, so they might be able to pick up hints from an existing 7nm line.
It is always more convenient to use existing capacity at an existing vendor (A) than to develop new capacity (B). This also has the effect that vendor A gets a bigger part of the pie and there's no money left for developing capacity at B.
So once you get the Huawei treatment, you have no choice and do B.
It doesn't matter how many sanctions you pile on top of each other; if you don't have a proper fab there isn't much you can do about it but plan for 15 or 20 years, shower a shitload of money on it, and maybe end up with something technologically capable. But you won't have a market like TSMC's, so good luck making it economically viable.
In most countries your average consumer can't buy a subpar CPU for $400.
The point is that if you do shower money on TSMC for the next 15 or 20 years, you won't get your own fab built. You give money to someone else for a short-term advantage, at the price of not having long-term capacity and control yourself. You basically co-finance R&D that in the end will be owned by someone else.
No, your new fab won't have a market like TSMC has today; but neither will TSMC tomorrow, when the market splits.
The same thing happened with Intel vs. the rest of the fabs, and TSMC made it. Just as Intel couldn't stay on top forever, the same will happen to TSMC.
I’m fairly convinced that the authoritarian regimes will eventually come to realise that pooling their resources to jointly develop manufacturing technology will behoove them as it will allow them to build their own fabs to manufacture their own devices.
The know-how that will come from this collaboration is useful to all of the participants, and it puts them in a position to be independent of the USA and each other (if each builds their own fab), and doesn’t require revealing their processor IP.
There are other examples of collaboration between authoritarian regimes. Most of the time (in the 1970s and 1980s in particular) OPEC was hugely effective (just think of the oil crises of the 1970s, when they curtailed production), despite the regimes not always being particularly open to collaboration. And realise that collaboration in a cartel such as OPEC gives each member an incentive to defect and sell more of its own oil at the higher price. A consortium of regimes banding together to develop chip-manufacturing technology would not suffer from such an incentive (after all, they can sell it or divulge it, but that doesn’t reduce its value at all — and it’s zero marginal cost anyway).
> It's great to see that countries are starting to become more and more technologically independent.
I'm of two minds. On the one hand, I like seeing a diversity of technologies. On the other hand, economic interdependence is one of the forces that prevents countries going to war.
So x86/64 support via BT, cool. But I couldn't readily find any (English language) docs describing the native instruction set, have you folks seen any?
In fact, you can't use this manual to create a compiler for Elbrus.
It looks like you still have to be a Russian citizen with a signed NDA to get all the required documentation.
>"All of the world’s major superpowers have a vested interest in building their own custom silicon processors. The vital ingredient to this allows the superpower to wean itself off of US-based processors, guarantee there are no supplemental backdoors, and if needed add their own."
Any major company that deals in anything Internet, Web or Cloud related -- also has a "vested interest in building their own custom silicon processors", for exactly this reason -- security.
Some have realized this (Apple, Google, etc.), and have the resources to do this.
Others have realized this and do not have the resources to do this (smaller companies).
Still others have not realized this yet (thinking that software virus protection will solve problems which are in hardware), and are still in the process of waking up.
Prediction: We're going to see a lot of new CPUs and CPU designs by many players in the next 20 years...
No surprise that it's got a large L1 instruction cache, given that it's a VLIW. Those aren't known for small code size, though I wonder why we don't tend to see variable-length VLIW architectures?
And the same happened in the past too: https://www.theregister.com/Print/1999/06/07/intel_uses_russ...
But I couldn't find more information about this team.