VMware Confirms Layoffs as It Prepares for Dell Acquisition (techcrunch.com)
170 points by walterclifford on Jan 27, 2016 | 112 comments



Inflated and leveraged equity assets seem to be driving an M&A spree. This isn't limited to domestic deals, either: Chinese companies with high earnings multiples are leveraging their equity to buy out assets abroad (ex: one such company bought a GE business unit for ~$2b).

We should expect more such events to come. I wonder how much of the Yahoo! workforce will be laid off once they spin off their core assets.


Found a post on another website today that sums up the developing economic landscape quite well (edited to be more appropriate for HN):

>The [austerity/ free market advocates] need to quit [complain]ing and just accept that we need to stimulate the demand side of the economy. The US and Europe are currently headed down the same course that Japan went down, and they're stuck in an inescapable stagnation. We've implemented the same QE and austerity policies that they did years before us, and now we're beginning to experience the same stagnation and gradual decline they did. Just like Japan we're propping up large, [poorly managed], decaying businesses at the expense of everything else in our economy, all while starving our economy's consumers of money.

>We've printed tons of money to fund big business, and cut back on social services so businesses won't have to pay as many taxes, and yet we only have a worsening economy to show for it. Unless you want the stock market to be the only segment of the economy that doesn't collapse, quit [expletive] complaining and just hand people money.


The problem with the EU is neither austerity nor stimulus. The problem is that there are very few local products sold globally and locally, and employment is both scarce and ill-paid for the populace at large, since local industries moved to cheaper pastures in the 90's or died from competition.

There is no crisis to fight in Europe; this is the "new normal". Unless they can get off their collective asses, push forward what Schengen started, and become one grand federal state, everyone will proudly sit on their ruins in a couple of decades.


Why do you think a federal state with the same fiscal characteristics across the EU would help anything? That would bring only deeper centralization, out-of-touch decision making, and an inability to capitalize on local advantages. We already see how Germany utterly dominates the EU while giving scraps to the rest, even to France. Now imagine the whole EU had the same taxes etc. as Germany, but individual states had 10x less capital. How would that help the EU?


Well, I was more daydreaming than anything; I don't think more superstructures are needed, especially not under the grip of Germany.

But the barriers to entry make the market a very uncompetitive area in which to start a business. Just think about the new bullshit VAT collection mechanism for online companies, or about southern countries being forced to keep their immigrants while stronger countries can sidestep the Dublin III regulation at their convenience, etc.

There is a great deal of unfairness, uncertainty, instability, and risk in investing in Europe right now. Sure, individual countries are fine by themselves, but if you consider them alone then China, the USA, India, and Russia are all bigger markets with more prospects for growth than any single EU nation, while tapping the whole EU as a market means complying with umpteen different codes, consumer laws, taxes, exchange rates, etc. Yes, everyone uses the euro except those who don't, and among those who do, purchasing power varies by country so much that it's common to have price tiers between different countries.


Your blurb seems to contradict itself. It claims we need stimulus, then says the West and Japan tried this already and it hasn't worked.

Maybe we need to allow the free market to function. Who cares if house prices drop from $250,000 -> $50,000 and the Dow from 16,000 -> 3,000? They will eventually be snapped up by someone at some price, thus fulfilling 'demand.'

So much debt will be wiped out through defaults in this process as well; it will be great for the average person, who will finally be able to afford to meaningfully participate in the investment world. Dividend yields of 2-6% mean nothing to a small-time investor, but 20% is something we could very well see in a liquidation market.


Businesses aren't demand-side. If money is printed, it should go to consumers.

Or better, find a way to raise wage levels, while dropping equities, especially real estate.

I'm a fan of government as employer of last resort.


You need perpetual growth because all the math models used in the financial industry are based on perpetual exponential growth - insurance, profitability, the attractiveness of financial services, hedging, etc. all rest on the assumption that the amount of wealth (money) increases over time. Nowadays this is achieved basically through inflation alone (the profits in trading are more or less just inflation these days).

Somebody once said that humanity's greatest shortcoming is the inability to understand the exponential function...
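
As a rough, made-up illustration (a back-of-the-envelope sketch, not any real actuarial model) of how even a small constant rate compounds:

    # Hypothetical: a fixed annual rate r compounds as W(t) = W0 * (1 + r) ** t
    def compound(w0, rate, years):
        """Wealth after `years` of constant growth at `rate`."""
        return w0 * (1 + rate) ** years

    for r in (0.02, 0.05, 0.08):
        print("rate %.0f%%: 1.0 -> %.2fx after 30 years" % (r * 100, compound(1.0, r, 30)))
    # Even 2% a year roughly 1.8x's nominal "wealth" over 30 years; pricing and
    # actuarial models quietly assume some such rate continues forever.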


This has always confused me. Now, I'm not an economist (I have an M.S. in CompSci and do security research), but isn't the correct model for growth in a majority of businesses sigmoidal, as opposed to purely exponential?

For example, let's imagine a company that sells toasters. At some point they're going to reach peak market saturation, where everyone in the world has a toaster. Now during this process, right when growth is at its peak, we see an almost ethereal phenomenon that capitalism delivers. We see the most optimized form of an idea materialize, which everyone can benefit from and gain wealth through. Wealth in this case would be owning a toaster in its most optimal form according to the laws of physics and current engineering processes.

The problem is that once this point is reached, the projected exponential growth for the business just vanishes. Instead we see a gradual decline until a new, lower level of demand (replacing broken toasters, etc.) is reached and the market settles into equilibrium. During this process we see the nasty side of capitalism, as the toaster company fights tooth and nail to prevent this inevitable conclusion. We see monopolization, deliberate weakening of the integrity of the product to hasten its EOL (which wreaks havoc on the environment), absurd patenting and copyrighting, digital rights management, lobbying, and bailouts. This not only hurts the customer, but the economy as a whole, as our political system is set up to encourage this type of behavior.
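
To make that concrete, here's a toy comparison of the two curves (my own illustration with made-up parameters, not anyone's actual forecasting model):

    import math

    def exponential(t, initial=1.0, rate=0.5):
        # Unbounded growth: sales keep multiplying forever.
        return initial * math.exp(rate * t)

    def logistic(t, capacity=100.0, initial=1.0, rate=0.5):
        # Sigmoid growth: looks exponential early, then flattens out as the
        # market saturates at `capacity` (everyone owns a toaster).
        a = (capacity - initial) / initial
        return capacity / (1 + a * math.exp(-rate * t))

    for t in (0, 5, 10, 15, 20):
        print("t=%2d  exp=%10.1f  logistic=%6.1f" % (t, exponential(t), logistic(t)))

The two are nearly indistinguishable early on, which is exactly why projections built during the exponential phase overshoot so badly once saturation hits.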

Now, one of the main themes I'm seeing in this discussion is the problem of demand. It seems like a lot of "things" that our economy has relied on consumers purchasing have hit this market saturation point and are being limited by technological advancement.

I think that we are at a point where we need to change our financial structure to one that helps create demand. With the federal minimum wage remaining relatively stagnant for the past few decades, along with more and more low-level jobs being replaced by automation, people simply don't have the money to buy things, let alone make investments. The only real solution I see is a push for a universal minimum income. But then again, what do I know? Like I said, I'm not an economist..


>wealth (money)

In most existing monetary regimes, money is not wealth, but debt.

>You need perpetual growth because all the math models used in the financial industry are based on perpetual exponential growth - insurance, profitability, the attractiveness of financial services, hedging, etc. all rest on the assumption that the amount of debt (money) increases over time.


> Your blurb seems to contradict itself. It claims we need stimulus, then says the West and Japan tried this already and it hasn't worked.

That is not what his comment says. It says we're not doing the stimulus needed.


At this point, I wish folks would look back at the USA from the conclusion of WWII to 1948 and after the 1920 crash. I don't think stimulus is the real answer.


I guess you weren't around for the financial crisis of 2008/2009. The entire monetary system was at risk of collapse, and there would have been a severe price to pay.

You can't have houses lose 200k in value without severe repercussions to the economy around the world. House mortgages are connected to mortgage securities which are connected to pension funds, etc.

If we enter a massive deflationary period as you seem to think is good, then everyone will be holding cash and no one will be spending it because everything is going down in value. You can't get 20% dividend yields in a deflationary world.

Be careful what you wish for.


I guess one way of looking at 2008/9 is that we avoided for a while what will be inevitable. That severe price is one I think we need to pay eventually. Forestalling it is possible, but it seems more responsible to realize that the longer we avoid it, the worse it gets.

Nobody is going to make money in a deflationary period, and excess consumption will fall, but it is still necessary to consume at baseline. The stock market is a very poor representation of true consumption, as it prices in too much of the future viewed through a very specific set of circumstances and assumptions.


That blurb wasn't posted in the context of stimulus: it was posted in the context that we're entering the same stagnation Japan has placed itself in, where there's a feedback loop between low consumer demand and businesses cutting both wages and hiring, further hurting consumer demand, which then further hurts the economy.


Many countries are loaded with debt to the maximum their debt/GDP ratio allows and will be in big trouble when even the slightest recession appears - so they will prefer to apply stimuli even if it doesn't make life better for the average Joe.


I've been saying this for a couple of years now. When wealth/wage inequality gaps widen and more and more money is funneled into the top .5%, those people and orgs simply do what they do best: financial, industry, and market manipulation in order to strengthen their positions.

It's very dangerous because it makes the already wealthy and powerful even more so, and reduces choices for the consumer, while actually driving up costs because of less competition.

While it's a very simplistic point, the oil industry is a good example of this. Ignoring the greater geopolitical/strategic game around the petrodollar as a reserve currency, what is happening right now is that all the scrappy young startup oil companies are being gobbled up by the big guys. I'm pretty sure as soon as all the M&As are done, the oil price per barrel will rocket back up.


The challenge with this statement is that Dell is privately owned, so it's "private equity" as opposed to "public equity"... which is harder to overvalue.

If what you say is consistent, then EMC/VMware is overvalued in the M&A activity here (and honestly it might be).

The best correction I can make is that "leveraged private equity is acquiring inflated public assets".


Wait, how do too-high stock prices cause companies to buy stocks? In the hope that the price goes up further, i.e. a classic bubble?


A little late. I got the call yesterday


Are there more layoffs than usual now, or am I just imagining things?


What departments/roles are these targeted at? Are they more support roles - HR/Training/IT or are they more engineering/products - engineers, PMs etc?



That is sad. Desktop virtualization is incredibly useful. And VMware is the only one that can pull off macOS virtualization on a Windows host decently so far.

But Workstation was left on the back burner for a long time - we haven't had a lot of killer features there since '07, probably. So I guess this is not a new decision.

Clarification: I mean that workstation was deprioritized by corporate, not by the team that worked on it.


Workstation was heavily developed up until, well, yesterday really.

I personally spent 2 years of my life, starting in 2008, bringing Unity to Workstation on Linux and making it work with every combination of Linux and Windows I could throw at it. That work was continued by a teammate for several years.

I spent 3 years rewriting most of the foundation, UI, and server infrastructure for Workstation 8, bringing the ability to connect to remote VMware ESXi/vSphere servers, along with the server component of Workstation 8. This work allowed VMs to be hosted on any server and accessed from any other server, and allowed VMs to be pushed between servers. 3 solid years on this feature alone, given just how much was needed to make that happen.

In the same release, we replaced the old Teams feature (a single feature that provided a multi-VM UI along with software-defined networking segments) with a series of more independent, more useful features. These were just a couple of the major features released in Workstation 8, and with all this came cleanup in the UI to keep the experience sane, not bloated.

That came out in 2011.

Workstation 9, released in 2012, came with a web-based UI for interacting with VMs called WSX (a feature I dedicated a bit over a year to). It also added UI refinement for the features that came out in Workstation 8, more remote VM support, hardware improvements (USB 3, Hyper-V, OpenGL for Linux VMs, nested/Inception-like VMs), locked-down virtual machines for IT, and probably more that I can't remember.

Workstation 10 followed that a year later, and brought guest hardware support for tablets, enhancements for Windows 8 hosts, more remote VM improvements, better command line automation for remote-controlling/creating VMs, and a bunch of other things. UI-wise, it was a smaller release, but it did a lot for the hardware support.

I left around this time to focus on Review Board (https://www.reviewboard.org/) full-time.

Since then, they have released 2 major versions: Workstation 11 and 12. From what I can tell, these were largely about hardware and performance improvements, less about major UI changes, but there's a lot that has to happen for those improvements. Hardware support is crucial to keeping the VMs useful in many situations. While building these releases, the team was also busy helping the View team consume bits of the Workstation/Fusion codebase. They also began development of AppCatalyst and Flex.

There's also work that happened on Player, Ace, and other things, all throughout.

So that's a lot of killer features in my opinion :) I barely scratched the surface of 8, and didn't go into all the stuff we did in 6 and 7.

We were all very proud of the product, and often spent our free time working on it. I should point out, this was not a large team by any means. It was an amazing team, though. A family. One that will survive these layoffs, one way or another.


First of all, thank you; accessing ESXi servers via vSphere has been a godsend over the years (FWIW, still running a relatively recent version of vSphere on an ancient 2003 Server VM ;-)).

Workstation was my go-to as well for desktop development needs for many years, but I switched to VirtualBox after Workstation 10. Kernel updates on Linux often broke Workstation: you needed to wait for VMware to release an update, upgrade to the next version (which would also soon lag behind the latest kernels), or search for a patch over on Arch [0].

VirtualBox does the trick but Workstation's a better product.

[0]: https://aur.archlinux.org/packages/vmware-patch


For me, killer features would have been PCI passthrough, a higher- and better-performing 3D driver, and better support for physical disks - that last feature never quite worked, on Windows at least. Remote connect/management and so on are nice to have.

These features may have been off-limits for corporate reasons, since ESXi has passthrough. And yeah - I view it from a strong power user/developer/gaming angle, not a sysadmin one.


Hmm, I don't think management really ever forbade us from doing anything. I'll have to think about that, but that's not my recollection. It's more that we had a lot of customers in different segments wanting different things, and our own list of what we thought would make a good release, and only so much time and personnel to make things happen :)

A teammate just told us he's bummed he didn't have just a bit longer to work on Workstation, because he had a few things left he wanted to fix and rework for the next release. Our personal todo lists were so long, we could have filled another 10 releases... Shame we didn't have that opportunity.


I'd love to get Replay Debugging [1] back... That of course was already gone in VMWare Workstation 8.

That was and still is a killer feature.

I know about rr and such; it's just another level to be able to record the whole system state.

[1]: VMWare Workstation 7 demo about Replay Debugging: https://www.youtube.com/watch?v=YjZWn3iDPiM


I don't know you. I'm not remotely a VMWare customer, haven't even used VMWare Player.

It just warmed my heart that you and your team put your heart and soul to that product and loved every second of it. Thanks for doing all that hard work and doing it the way it's supposed to be done.


Great job on building what was a great product. I've been a faithful Workstation and then Fusion user for the last 10 years, and it's always been much more reliable than its clone VirtualBox.

I was already a bit troubled by the shameless yearly waves of "upgrade begging", and by VMware clearly keeping features back to try and segment the market (especially with Fusion, because "everyone knows Apple users are rich idiots"). This final nail will likely push our company to VirtualBox for good; it's become the standard in OSS circles anyway. You did well to leave, VMware as a company has lost direction.


Been a Fusion customer for years. It's the only way to do CNC on a Mac, and it's always been great. It always did what it said it would do. Really the best kind of software.


I have used VMware Workstation (and Fusion) for everything between playing "Burnout: Paradise" to writing C# applications in Visual Studio.


It could be that end-of-year numbers have come in and some firms need to cut costs.


Not sure why the downvote. This is common practice in many major, non-growth, global companies.

For the past two years, MSFT has cut a large number of people annually. CSCO has been cutting a large number annually since 2011 (though this is more of a migration of staff to lower-cost economies/newer focus areas). HP has a long history of annual staff cuts. Even Google has been cutting staff on and off for the last 4 years (mostly post-acquisition).


Ha, the company doesn't even have to be major or global. Non-growth is sufficient.


In general, what is the layoff policy and practice in the US? How much does the company have to pay, and how much does the IT sector usually pay? Also, what are the usual time limits in the industry?


> How much does the company have to pay, and how much does the IT sector usually pay?

In most of the country (California is probably an exception), the company does not have to pay anything at all.

I don't know about "usually". A lot of these big SV companies seem to usually pay some severance.

> Also, what are the usual time limits in the industry?

What time limits do you mean? As in, what notice do they have to give you? In much if not most of the country, none.


> the company does not have to pay anything at all.

That's not exactly true; the company does have to pay unemployment insurance, and each state has varying laws. Most states' tax sites will have the documents.


Yep, I meant notice. Thanks.

Was just wondering, as this might really screw someone over if they are short on money, have a mortgage, etc.

In the EU, there is some minimum notice time (I think two weeks?). In practice (from what I've seen) it's usually two to three months, but layoffs would be done immediately by agreement (basically paying for that period anyway). Minimum severance is one month's pay (average, so bonuses included) for each year worked, capped at three.

So in real life, one would get 5-6 months' pay when being laid off. Depending on the situation (the reason for the layoff), one might not be able to use up their vacation, and that would have to be paid out as well. On top of that you might apply for unemployment benefits (which you paid for, so I'm not sure how relevant it is).
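
Purely to illustrate the arithmetic (the exact rules vary by country and contract; these numbers are just the ones above):

    # Rough payout under the scheme described: notice period paid out, plus one
    # month's average pay per year worked, capped at three months of severance.
    def layoff_payout(avg_monthly_pay, years_worked, notice_months=2.5):
        severance_months = min(years_worked, 3)
        return avg_monthly_pay * (notice_months + severance_months)

    print(layoff_payout(3000, 4))  # 16500.0, i.e. about 5.5 months' pay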


In the US you have no such guarantee, it's completely at the pleasure of your employer.

I could walk into work this morning, get laid off, and escorted out of the building without notice. That's it, done, no more salary. To make it even worse, you also lose the subsidized health insurance. My employer pays about 60% of the cost for my family, so my monthly premiums would jump from $800 to nearly $2000 a month at the same time my income drops to near zero.

Sure there's unemployment payments, but they're usually only a fraction of your previous salary, and are often capped to a maximum by your individual state.

Welcome to capitalism ;)


Conversely you could also leave at a moment's notice for a better opportunity.


Well, legally that's true, but in practice, this makes you "not a team player", or something like that, and you won't be able to use the company as a reference ever.

You have to give at least two weeks' warning to the employer if you want to leave, but if they want to fire you, they just show you the door without warning. Note that the employer may then just fire you anyway, so your courteous notice results in two weeks without pay. For some reason, this is considered fine, even in industries where it's a laborer's market.


While there are certainly legal differences across countries, I think the stigma applies universally. It's weird, especially considering many workplaces seem to want to project an image of cohesiveness and team spirit and what not, but when it comes to practicalities all of that goes out the window in favor of the bottom line. (I'm not saying the bottom line is unimportant, just that the warm fuzzy feeling you're trying to instill is worth nothing when the man comes around. The cake, as it were, is a lie.)


There are some, limited legal protections in the US, notably the WARN act: https://en.wikipedia.org/wiki/Worker_Adjustment_and_Retraini...

Still even provisions like that are inconsistently applied. I worked at an internal development studio of a large video game publisher that closed, resulting in the layoff of all ~90 employees. The studio had two offices: one in Los Angeles, California and another, smaller office in Austin, Texas. The employees at the LA office were covered under the WARN act and thus paid/insured through the end of the year (2 months) and received company severance on top of that. Those of us at the smaller Austin office were not covered under the WARN act as we had less than 50 employees at that location. Instead we received a check for our last two week pay period and the aforementioned severance package.


I'm not sure this applies over all of the EU – certainly in the UK it's not true. (I learned this the hard way. Note to anyone working in the UK: in practice there's a minimum probationary period of two full years, regardless of what your contract says.)

Anyway, I'm not sure this scenario is any better than the US scenario of little to no employee protection. Sure, it means you're not flat on your back from day one, but the cost of firing anyone is substantial, making it very difficult, particularly for small businesses, to hire simply because the risk is too great. I'd be keen to learn if there are other ways to reduce that risk than simply making it easier to fire people indiscriminately.


No warning is required. In practice it depends mostly on the size of the company and the size of the layoffs... I think most companies would like to lay off without warning, but that's not always practical. It also depends on whether they are hoping to reduce headcount in general, in which case they may make a general offer of some money if employees leave... this isn't generally a viable strategy if it's e.g. a division that's being closed.

If the company isn't in dire financial straits, it's common to give some severance pay, usually tied to length of employment. This isn't a requirement.

With the US's ridiculous system of getting most health insurance through your employer, it does mean that you lose many of those health benefits. There is a legal provision called COBRA where you can continue for a time on your previous employer's health care (at your expense). The ACA (Obamacare) makes losing your health insurance less dire than it was in the past (for things like "pre-existing conditions"), but it still sucks.


Wrong. Read up on WARN. There are conditions where an employer is required to give warning and others where no notice is required.

https://en.wikipedia.org/wiki/Worker_Adjustment_and_Retraini...


As the other responder stated, even in cases where this law applies the employer can opt for "pay in lieu of notice". So 60 days severance pay obviates the notification requirement. That aside, there are a lot of scenarios where this doesn't apply (e.g. "If 50 to 499 workers lose their jobs and that number is less than 33 percent of the employer’s total, active workforce at a single employment site"). I'm not going to pretend to have numbers to back it up, but especially in the software industry this exemption seems like it would cover a fair number of layoffs.


Aside from the many ways to get exceptions, this little nugget....

>An employer who violates the WARN provisions is liable to each employee for an amount equal to back pay and benefits for the period of the violation, up to 60 days.

...pretty much says companies can do what they want. For a lot of these folks, that's a severance.


I am so curious why Dell is buying EMC/VMware. What position/product do they hope to achieve?


I suppose Dell already has a chunk of the server hardware space, so it makes sense to get into virtualisation since it's a closely related industry and probably represents a big use-case for their hardware.


It also gets them into a "hybrid cloud OS", IaaS model...lines up well for competition with BYO Datacenter+OpenStack, or Windows 2016+Azure. VMware owns many, many assets beyond vSphere and has a strong chance (IMO) at becoming an incumbent hybrid SDDC provider with Virtustream, vCloud Air, VCloud Air networking, etc.


They would own the datacenter, theoretically. People could come to Dell, and Dell would supply all of the hardware, from the storage to the virtualization to the servers. With VMware owning Nicira, I guess that also includes the virtual networking. It could be a compelling story, except IBM has already talked about how datacenter sales have dropped, so the US at least is probably overcapacity at this point. Bad timing, but not unexpected for a company like Dell, though something this knuckle-headed is expected more from HP.

The funny thing is, virtualization is dying. All of the large private datacenters these days are non-virtualized bare metal on commodity hardware. Virtualization is an unnecessary overhead when it comes to datacenters these days. And enterprise storage is also not "web scale" since it's faster to shard data across 100k servers than have a single huge database with a single point of failure.


> All of the large private datacenters these days are non-virtualized bare metal on commodity hardware.

This is so hilariously wrong I can only assume it's from a marketing brochure.


Just one data point, but at Spotify we have almost all our stuff deployed in our data centers. ~10K machines, all bare metal, barely any virtualization, in a handful of sites.

It's just cheaper and easier for us to have a lot of hardware racked and stacked and then spin it up as needed without having to worry about virtualization. The experience for our devs is pretty much the same whether they're provisioning a bare metal machine or a VM.


Out of the SF bubble, the exact opposite is true. In enterprise environments, racked servers running VMware hosts still reign supreme.


It basically is. It's almost exactly verbatim a statement made by Google when talking up their container game.


Doesn't Google make up a large portion of the private datacenter space?


> The funny thing is, virtualization is dying. All of the large private datacenters these days are non-virtualized bare metal on commodity hardware

This is wrong on so many levels. Virtualization is, and will continue to be, an integral part of datacenters. First and foremost, it enables you to deploy one single image to any machine you have, if you need computation nodes. Or it enables you to have multiple VMs on the same hardware. Both of those apply to commodity and server hardware alike.


I think "dying" is perhaps a bit of overstatement, but there is some merit to the notion.

Containers may change the game. If you can containerize on top of something like Mesos, Kubernetes, etc - there is no need to run on top of a virtualization layer.


Virtualization was a buzzword and, as such, was used for plenty of things it shouldn't have been used for. But it will continue to be a cornerstone of any IT infrastructure for the foreseeable future, at the very least until containers mature to the point where they can replace virtualization with near-bulletproof sandboxing.


> The funny thing is, virtualization is dying. All of the large private datacenters these days are non-virtualized bare metal on commodity hardware.

Well, you sound like a Google employee, because that's pretty much their party line.

Of course, reality doesn't mesh with that when you step out and look at other data centers.


This is a very, very wrong, and unnecessarily harsh and personal in its wrongness, comment. The person you are replying to is dead on and I suspect you are conflating "I support a lot of virt environments in my profession" with the state of the industry.

At scale, in prod, virt is legacy. Aside from Linode where I obviously ran virt, the only virt I've ever touched in private datacenters is relegated to labs or testing farms. We have a lot of tech, both from supercomputing and the new valley stuff which is inexplicably rewriting all of that, that makes virtualization completely unnecessary outside of a multitenant situation with separate paying customers and security domains. Even there multiple vendors are working on it, notably Intel, who is pushing VT-x into containers with multiple efforts.

Don't be so confident talking about reality, because yours is very different from mine. The original poster was unguarded with their claim, but in terms of the winds of the industry they couldn't be more correct, and if you think I'm wrong you're on the wrong side of the shift that is coming, nigh already here.

Example: We bought space in Virginia for Foursquare and didn't deploy a single byte of virt. My current employer has dozens of facilities and virt is a lab thing. No prod, anywhere, across dozens of products and lines and organizations, uses virt. Google doesn't, right you are. Nor does Twitter, who is deep into Mesos (same for everyone who is also deep into Mesos or its many friends).

You might scoff and say well, my CIO says, and you'd be correct today, but the state of the art for resource utilization at scale moved away from virt because we figured out that running full operating systems next to each other as a bandaid for bad CD and provisioning and resource allocation stories is a shitload of overhead for zero gain. There is absolutely nothing that virt gives you, aside from a perf hit and unpredictable low-level behavior (some of us care about cache lines and context switches), which cannot be implemented with tooling atop bare metal platforms. Virt makes your hot aisle hotter so you don't have to figure out bare metal provisioning. You should care about that, then figure it out. It's not hard.

If you are strongly convinced that I'm wrong, much like your neighbor posters, the state of the art simply hasn't made it to you yet. Sorry. You should be willing to consider, however, instead of sniping at change like your tone will keep it at bay. There is a lot of denial in this thread, and it's trivial to deduce why that is.


> the state of the art for resource utilization at scale moved away from virt because we figured out that running full operating systems next to each other as a bandaid for bad CD and provisioning and resource allocation stories is a shitload of overhead for zero gain

And people are moving to OS-level virtualization instead. But the point still stands: for any independent datacenter, there is plenty of business sense in using virtualization to serve customers' needs. Heck, even for an internal datacenter, hw virtualization makes sense for development, testing, and general infrastructure.

For application-specific purposes virtualization was never really a good idea to begin with; plenty of people have said that ever since it became a thing, but "state of the art" was virtualization. Now developers realise that virtualization wasn't the way to go, and revert to bare metal or OS virtualization. It is a classic example of a fad born of a buzzword.


Amazon's virtualized scale is the counterpoint to bare metal being the only "state of the art".

Virtualization is not going anywhere, and I don't think most companies will be adopting bare metal en masse as you suggest especially as virtualized extensions to containers like rancherVM become commonplace.

Also, many in the financial sector prefer the security benefits of hypervisor and VT-x isolation to reduce exposure to kernel/hw-level exploits.


> This is a very, very wrong, and unnecessarily harsh and personal in its wrongness, comment.

In other words, you are ~very~ (edit:) exceptionally offended that your opinion, drawn from your experience, differs from mine, whereas most of the published research on the industry as a whole does not agree with your experience.

I get that you're mad, but you should take your emotions out of the equation and look at the numbers published by literally every industry analyst.


No. Your quote does not support your rewording in the slightest. You are interpreting me as angry and offended because it makes dismissing me easier, and because it is mirroring your own feelings, for what it's worth. It's completely irrelevant, but I am not malcontent at all even despite your downvoting me for a well-argued counterpoint.

Perhaps it is not me who should step back and reevaluate.

Part of the problem here is a lack of specificity on the industry. Since you invoked analysts, Gartner and the typical HN view of "industry" are wildly different, but I would posit what happens in what we typically call the "industry" is in the pipeline for the Gartner side in about a decade. However, tickers I would normally put in the Gartner/CIO bucket are aligning with me on this, more than you'd expect. Even Manhattan finance.

And yes, I am aware of CAGR forecasts for virt, but a big driver of the market's growth is expansion of virt deployment footholds thanks in no small part to momentum fueled by opinions like yours. There is also an incentive to sell virt by hardware and procurement vendors, because you need more fleet to do the same work under virt, unconditionally. The market will level off because fewer new projects and companies are reaching for virt as evidenced by, yes, Google, and half the other household names in the valley.

Thought exercise: Google published Dataflow out of their work on streaming architecture and said they are moving on from MapReduce (for the most part). If you got research that says the Hadoop market is growing, wouldn't you look at it objectively in context since the very organization who defined the technology has moved on from it? Market research and analysis often lacks frontline context, much as it does here.


How about some references? All I see are assertions being thrown out there.


>Well, you sound like a Google employee, because that's pretty much their party line.

Do you have a citation for this or are you just propagating your personal issues with Google again?


>> The funny thing is, virtualization is dying.

I disagree; the amount of dependency we have on virtualization at my current organization (which has been a leader in on-premises software systems) is huge. And we are still generating the majority of our revenue from there.


I disagree with your second paragraph.

What I see Dell doing is consolidating products in a space that is shrinking while offering a unified datacenter product which has two target customers:

* Bringing dinosaurs into the modern age with a "private cloud"
* Bringing maturing organizations into the physical realm, offering the advantages and alternatives to public cloud offerings


The funny thing about HN is how easily people discount Azure and AWS as the two largest computing clouds on earth, all virtualized.


I think there is a great deal of ruin remaining to milk in the enterprise market, and VMware has little competition at most customers.


> It could be a compelling story, except IBM has already talked about how datacenter sales have dropped, so the US at least is probably overcapacity at this point.

Counterpoint: Dell's modular datacenter solution is doing well and really murdering the competition.


SaaS EHR and Clinical Applications.


Just to clarify, VMware has delivery mechanisms for SaaS EHR and Clinical but does not own them. They do partner with EPIC, McKesson, etc. to make sure app and/or desktop virtualization works well with them in clinical environments.


I really wish Cisco had bought EMC/VMware instead.

Instead, I fear EMC will now end up with Dell quality. Ugh.


...as opposed to Cisco quality? Interesting position.


Storage != Networking.

Storage is an end-point. You can control almost all the variables inside. Networking is in the middle. You can control almost none of the variables.


Cisco does enterprise better than Dell does IMO.


I would have agreed with you 5+ years ago. But Dell, after privatization, has really lifted its game. Compared to HP, Lenovo, and UCS, they are better quality and their customer support is leaps and bounds ahead.


Admittedly, I haven't dealt with Dell much in recent years, and my opinions are probably colored by the various Dell fiascos on the consumer side (root cert, etc.).

My interaction with EMC, on the other hand, particularly on the CE side, has always been positive. Their CEs are consistently some of the most competent people I've dealt with across multiple vendors: Oracle, NetApp, etc.

Lenovo etc. to me are more targeted towards a consumer audience. My enterprise customers demand the high-touch experience, and that subsequently drives my interactions with vendors too. So to me there's a clear difference between the various players. Not right or wrong, but a difference nonetheless.

Besides, VCE notwithstanding, it just seemed to me Cisco's product lines would have been much more complementary to EMC and VMware's than Dell's, but that's just me.


I wonder if the GPL lawsuit had anything to do with this? I guess it's not a good sign when a company will not fulfill its GPL obligations.

https://sfconservancy.org/copyleft-compliance/vmware-lawsuit...


The first version of VMware Server was great.

Then apparently version 2.0 was handed to the Java and XML fetishists to develop, and it was a huge bloated mess. It worked, but badly.

It looked like they knew what they were doing at first, but today we have virtualization services built into processors and the OS, making things easier.


The founders (who were married) left the company, and the staff went to other startups: there are a few at ElasticBox and some went to Microsoft, IIRC.

They 'won' virtualisation, but then virtualisation changed to 'cloud' and they never hung on.


If you mean VMware Server, the GSX replacement, and not the ESX products, then oh boy do I have stories to tell :) I was one of the, hmm, 5 or 6 people who threw that together in a span of months. It was more of a mess than it appeared, under the hood :) We called it the Frankenstein Project.

But it worked, and it worked well. More a testament to the infrastructure we had built for Workstation, and our team's ability to work hard and work well together.


On the Mac, I will be looking into Veertu and xhyve to replace Fusion.

http://veertu.com

https://github.com/mist64/xhyve


Can they boot the bootcamp partition?


No, I think that only VMWare Fusion and Parallels offer the ability to use a bootcamp partition to boot.


Ouch. I have a friend who works for VMware (didn't lose his job, well, yet), but he was complaining that the restricted stock he was getting (and paying for, since it's under the Employee Stock Purchase Plan) has already lost 60% of its value. Unlike stock options, there isn't a guaranteed buyback price; you just get the stock at a 15% discount based on the market value at either the beginning or the end of the ESPP purchase cycle (the lower of the two). So while he hasn't lost actual money yet, he most likely will get stuck with quite a bit of stock which might be impossible to offload and very well be worthless by the end of the ESPP.


> he most likely will get stuck with quite a bit of stock which might be impossible to offload and very well be worthless by the end of the ESPP

Neither of those is possible. The worst-case scenario is roughly a 17% return on your money (from buying at a 15% discount), even if the stock does nothing but tank over the entire period. You purchase the stock at a 15% discount off the lower of the price at the beginning or the end of the period.

If the stock was at $100 at the beginning and $120 at the end and you put in $1000, you would get 11.7 shares of VMW stock (purchased at $85 each) with a current market value of $1411; you can sell that on the open market the next day for a 41% return.

Likewise, if you put in $1000 and the stock went from $100 to $80 over the period, you would get 14.7 shares (purchased at $68) with a market value of $1176, a 17% return. It doesn't matter how much the stock goes down; you're still buying a thousand dollars' worth of stock at 15% less than market value. Even if the stock went from $100 to $10, you'd buy 117.6 shares at $8.50 each (which would be worth $1176 at $10 a share).
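
If it helps, a quick sketch of that lookback math (terms vary by plan; this just mirrors the numbers above):

    # ESPP with a 15% discount and a lookback to the lower of the period's
    # start/end price, reproducing the three examples above.
    def espp_value(contribution, start_price, end_price, discount=0.15):
        purchase_price = min(start_price, end_price) * (1 - discount)
        shares = contribution / purchase_price
        return shares, shares * end_price

    for start, end in [(100, 120), (100, 80), (100, 10)]:
        shares, value = espp_value(1000, start, end)
        print("%d -> %d: %.1f shares worth $%.0f (%+.0f%%)"
              % (start, end, shares, value, (value / 1000 - 1) * 100))
    # 100 -> 120:  11.8 shares worth $1412 (+41%)
    # 100 -> 80:   14.7 shares worth $1176 (+18%)
    # 100 -> 10:  117.6 shares worth $1176 (+18%)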


Factually false. I know people who lost money on ESPP, granted it's rare. There is a non-zero delay between determining the purchase price and when you can actually sell. Stocks can and have fallen more than 15% in this window.


> Stocks can and have fallen more than 15% in this window.

Possible, but not common or likely. The shares are granted at the end of one trading day (the purchase price will be that day's close if it's lower than the price at the beginning of the period) and are available to sell the next trading day. E.g. my last ESPP grant was on a Friday and I sold my shares at the open on the following Monday.

So for someone to lose money through the ESPP, the stock a) would need to be lower at the end of the period than at the beginning and b) would need to drop more than 15% at the open. Let's say such a move happens once a year to the average stock; there are about 250 trading days in a year, so you have a 0.4% chance of that happening. (Realistically it's probably even lower than that, since drastic moves like opening down 15% are much more likely to occur following an earnings announcement than on an average day, and ESPP grants and earnings aren't aligned.)


For instance, today, VMWare is down 9% and dropped 19% on Oct. 21, 2015.


The restricted stock is not tradeable for at least 180 days after it was issued. I've talked to him again: his shares were issued and purchased at around $90 per share, and now they are worth under $50. He had about 400 shares under the RSU scheme; some were issued as part of his employee compensation and some were purchased, so he definitely lost quite a bit of money. The process at VMware for him was that he got some of his compensation in RSUs, and he was also allowed to purchase some stock from the company with his pre-tax income (this was capped). After the purchasing round ends, the shares are issued and then need to vest for 6 months, and during those 6 months the stock lost about 50% of its value.


> The restricted stock is not tradeable for at least 180 days after it was issued

ESPP and RSUs are two different and unrelated things. Your original comment I responded to mentioned only ESPP.

As for RSUs, they're tradeable as soon as they vest, which is typically a year+ after they're granted. The stock can certainly go down between when they were granted and when they vest, but it's more accurate to say you made less money than that you lost money. If I buy a painting for a million dollars and give it to you (at no cost) and you then sell it for $800K, you didn't lose $200K.

Don't get me wrong, I (and every other VMW employee) would much rather see our stock go up than down, but I think you're being a bit unfair to the ESPP and RSU programs. Even when our stock goes down (and it's done pretty much only that since I started... which I hope is just coincidence, ha), it still works out better than the stock options employees at other companies (particularly startups) get.


For VMs on a personal computer, is there any reason to use VMware over VirtualBox?


I haven't seen any benchmarks recently, but it's historically been noticeably faster. Vagrant's VMware plugin page has a general overview, although it's marketing material and not very technical: https://www.vagrantup.com/vmware/


Technically speaking, it is miles better: faster and more robust, as well as more polished.

However, it's losing the standardization battle to VirtualBox; all the OSS tools use it by default and VMware support is always lagging.


Nested virtualisation. To my knowledge VirtualBox does not yet support this, and has been steadfast in its refusal for years[1].

[1] https://www.virtualbox.org/ticket/4032


I develop on a Linux VM running on a Windows host, and I've found VMware to be considerably more stable and slightly more performant than VirtualBox.


My personal experience: It depends on the guest OS.

I'm on a Mac. VMWare Fusion runs Windows 7 and 8 with noticeably less lag than VirtualBox. For Linux (Ubuntu Trusty) and FreeBSD 10, I've never noticed a difference.


I try VirtualBox from time to time, but so far VMware wins every time. It's faster and has better hardware compatibility.


Mostly serial or USB interface speed and driver issues when running Windows hosts. VirtualBox is certainly getting better and better in that regard, though.

There is also better GPU support, in my anecdotal experience.


Related question: What about VMware/VirtualBox vs qemu-kvm?

Obviously its UI etc. is much more raw - I'm curious how it compares performance-wise.


Missed opportunity for HP.


Maybe, but they have their own storage lines, and have been investing heavily in OpenStack.

HPE may have been freed from some restrictions, but I think many would have drawn HP-EDS-Compaq-Digital comparisons, and rightly so.


The notion of Dell acquiring EMC (and VMW as a result) is a little misleading.

This is private equity acquiring EMC because that was the company's last resort and it should color one's perspective appropriately.


vScrew 1.0


Good news for Ericom?


Why use VMware's stack over Proxmox?




