I really, really hate sales sites that don't list prices.
I also think their use cases are either a joke or written by a fool. "Scary for Apps"? Seriously? http://www.azulsystems.com/products/zing/use-cases It also makes a lot of bold statements that it doesn't even try to back up with numbers, like "Better application performance under hypervisors than if run natively".
My scam sense tells me to run for the hills just from that one page. I hope for their sake that they redo that page in a hurry.
Keep this in mind: most companies that sell to enterprise and service provider customers aren't looking to make money with their website. Their SALESPEOPLE are there to properly qualify the leads and demonstrate the value of the product.
The whole point of not listing prices is to get customers to self-qualify. Those that genuinely find value in the products as presented will make the effort to contact the company. Price is really only something that is mentioned towards the end of the sale and indicates that there is considerable room for discounting from list price.
That's the path Sun Microsystems took in the earlier part of this century. The problem was, it was much easier to head over to Dell and buy a box and slap Linux on it. Ten minutes of your time, and you didn't have to try to convince a salesperson you were worthy of buying something.
You're not honestly comparing Sun hardware with Dell, right? You really think that Dell+Linux was competition for Sun?
Sun was selling to a particular customer that demanded rock solid stability.
If Linux had never happened, Dell servers wouldn't have made a dent in Sun's marketshare. Don't count out Windows either. They both ate away at the BOTTOM of the market for a while.
But neither of those have anything to do with a sales model.
If your hardware lists for $1M+ per box, you don't sell that with a webpage. You sell it after many, many visits with the customer to be sure that this is really what they need and that they will be able to use it.
Go look for a price for the CRS-1/8 on Cisco's webpage and tell me if you see it. Same for Juniper.
The appearance of a price on a webpage is a clear indication that you will never speak to anyone at that company who understands your business or requirements.....EVER.
All I'm saying is that, even in 2001, it was difficult for a person to buy just one or two Sun boxes. Sun didn't care about your business. They had salespeople to, ahem, qualify you. A hacker needing to get one or two Unix boxes up and running was much more likely to buy an Intel-based machine and put Linux on it than buy a low-end SPARC from Sun. As a result, they lost mindshare.
So they didn't care about your business, right? That just means you're not their target customer, right? They didn't seem to lose mindshare with the customers that they were serving, now did they?
I'm not Boeing's target market. It doesn't surprise me that they aren't going out of their way to post prices of 777 engines for my perusal.
In the end, you got the product you wanted (cheap Dell) and a business got a customer they wanted (dude with small budget).
I have personal experience with a company that moved off of Linux because they couldn't get decent enough hardware to handle their load. We could trade examples all day long.
The OP made a comment about web pages with pricing. My point was that lots of companies target customers that are not THEM. In fact, they often choose a strategy that allows them to explicitly target those customers that they think are their most likely prospects. Sometimes they get it right; sometimes they don't. Like a lot of things that drive company success, picking the right way to reach customers is an important decision.
Don't forget that Sun was purchased by Oracle for a princely sum. A lot of those Sun customers didn't seem to feel that their dollars were wasted.
If your hardware lists for $1M+ per box, you don't sell that with a webpage… The appearance of a price on a webpage is a clear indication that you will never speak to anyone at that company who understands your business or requirements…..EVER.
You do realize that Sun did have pricing for almost their entire line of servers and workstations on their website years ago, right? I distinctly remember playing with their configurator to put together a $1.6M+ many-processor system many years ago - if only my credit limit had been higher…
They might have had list pricing but I never saw that page.
Not just a page for pricing, they had an entire webapp for configuring and ordering systems. You could go in, select a server or workstation, configure it (CPUs, memory, disk, etc.), with the price updating as you went, and (IIRC) purchase it online. Not at all dissimilar to the Dell online purchasing process, actually.
This was around ten years ago, and they removed the capability not too long after that. That was clearly an attempt to diversify their sales process away from just the salesman, but for whatever reason it failed and they gave up.
List price or no, they obviously weren't content only having the process that you insist is just how it's done.
Hey, companies try new things all the time and then retreat when they realize that it's counter to the direction the rest of the organization wants to go. I'm willing to bet that some people felt that moving to an online sales model was a good idea. I just think that a lot more people felt otherwise.
AAPL tried to license their platform for a little while. That died too.
My point (to get back to the original discussion) is that selling $1M+/unit equipment is fundamentally different from selling boxes that cost $2K. The OP shouldn't be surprised that the price isn't listed, because that probably indicates that the whole pricing exercise isn't a neat process and involves more $$$ than they would be willing to part with.
For a lot of other organizations, price is (very nearly) no object since they are often the only suitable customer for whole classes of product. Since they're the market, THEY set the price. If you're a Telco or CableCo, you are squarely within the market for high-end $1M+ routers. If you're a financial exchange, you're in the market for ultra-high performance OLTP systems.
Lockheed doesn't sell fighter jets to school systems. I doubt a vice principal would find it insulting that they couldn't order one via a webpage. That's what "qualification" is: determining whether a customer has a use for your product and the budget (more or less) to purchase it. Qualification isn't about making snap judgments about a customer; it's about making sure you're allocating finite sales/engineering resources across the most likely purchasers.
Of course I'm not surprised that million dollar machines aren't sold that way. Sun had $3000 Netras, too. And lots of inexpensive workstations. These are what I was talking about.
Azul is a great innovator in the JVM space, and I think their innovation will start to trickle down to other VM-based languages like Ruby, Python, PHP, JavaScript, etc. They've already open sourced some of their innovations: http://www.managedruntime.org
Not so much in VMs, and more in beating current operating systems into actually working for these applications. I think a few other language VMs had the same issue, but instead of trying to beat Linux into usability they used bare hardware via QEMU.
When I used to work with JVMs all day I was always frustrated at having to decide how big the maximum heap for a given app should be. We had some XML processing tasks that almost never used much memory but every now and then would need tons. It was one part I couldn't thin-provision.
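Concretely, the only real knob is the flag you pass at startup, so the -Xmx you pick has to cover the rare worst case even though that memory sits idle most of the time. Something like this (the jar name is just a made-up example):

    java -Xms256m -Xmx12g -jar xml-batch.jar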
Am I right in interpreting this as shipping the JVM work off to a dedicated appliance-like server? Does that mean the memory profile on the appliance is the one that can be thin-provisioned?
Heap size tuning is one of those things that made me hate the JVM.
This technology could be extremely interesting -- especially if they make it easy to provision on EC2. Getting rid of the overhead of customizing OS distributions simply to run a JVM would reduce TCO and time to market, not to mention improve resource utilization.
Simply regarding the pauseless GC aspect, what is preventing someone from adding an existing realtime garbage collector like Rollendurchmesserzeitsammler (hereinafter rdmzs) to OpenJDK? Obviously rdmzs is designed for audio processing, as it bases its heap size on the audio processing cycle time, but are these similar GC concepts, or is the "pauseless GC" mentioned here something completely different from what rdmzs does?
I'm not familiar with the internals of rdmzs, but there's a fair bit of actual technical information available about the Azul collector. It makes heavy use of manipulating virtual address mappings, and by this gains the property of being relatively insensitive to heap size.
My impression is that it's a copying collector that works on pages at a time. When a page is relocated, it's marked protected in the memory address mappings, allowing the runtime to trap and fix any stale pointers at the moment they are used. At the same time, the collector proper uses memory fences to keep track of what pointers have been fixed, eventually allowing the protected pages to be released from the memory map.
I don't know the GC literature very well, but AFAIK this design is novel. The original implementation made heavy use of high-performance memory primitives supplied by the Azul hardware. I gather they have now found a way of running with acceptable performance on commodity x86 processors under virtualization.
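If it helps, here's a toy C sketch of the general trap-and-fixup pattern I'm describing (definitely not Azul's code, just the bare page-protection trick on Linux): protect a page, let an access fault, and repair the mapping in the handler before the access retries.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *page;          /* stand-in for a page "being relocated" */
    static size_t page_size;

    static void on_fault(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        char *addr = (char *)info->si_addr;
        if (addr >= page && addr < page + page_size) {
            /* A real collector would fix up the stale pointer here; we
               just lift the protection so the faulting access retries. */
            mprotect(page, page_size, PROT_READ | PROT_WRITE);
        } else {
            _exit(1);           /* a genuine crash, not our trap */
        }
    }

    int main(void)
    {
        page_size = (size_t)sysconf(_SC_PAGESIZE);
        page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        strcpy(page, "contents survived the trap");
        mprotect(page, page_size, PROT_NONE);   /* page goes "off limits" */

        /* This read faults, the handler repairs the mapping, and it retries. */
        printf("%s\n", page);
        return 0;
    }

The real collector obviously does far more than this (concurrent relocation, read barriers, fixing the pointers themselves), but the "use the MMU to catch stale accesses exactly when they happen" idea is the part this sketch maps onto.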
This sounds brilliant; virtual memory is a hardware feature that is currently sorely underused in user space. Unfortunately, most mainstream kernels don't give you deep enough access to the VM subsystem for serious trickery. For example, you can't map the same memory at multiple addresses unless it's file-backed; there's just no API for it. I have to admit I don't know if this would cause problems with caches on certain hardware.
In any case, you're free to mess with (virtualised) page tables when the parent is a hypervisor.
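For completeness, the file-backed escape hatch I mean looks roughly like this (a toy sketch; "/alias-demo" is just a name I made up): the same physical memory shows up at two different virtual addresses.

    /* build: cc alias.c -lrt  (older glibc wants -lrt for shm_open) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)sysconf(_SC_PAGESIZE);
        int fd = shm_open("/alias-demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return 1;
        ftruncate(fd, (off_t)len);

        /* Two mappings of the same shm object, at two different addresses. */
        char *a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        char *b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(a, "written through the first mapping");
        printf("read through the second: %s\n", b);  /* same bytes, new address */

        shm_unlink("/alias-demo");
        return 0;
    }

Take away the shm object and there's simply no anonymous-memory equivalent, which is exactly the gap I was complaining about.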
While I'm not a systems programmer, from what I've read (particularly from Poul-Henning Kamp) I think enriching the virtual memory abstraction is going to be a key area for OS innovation in the near future. MMUs and their signaling mechanisms offer a unique ability to trap events happening dynamically within your program. By revising the userland/kernel API, I think there are many interesting ways for userland software to better utilize the MMUs in modern processors.