
I wouldn't say it's ridiculous, given that:

- It may not be possible to produce an LTE iPhone 5 for either Verizon or AT&T right now (or for any other country), as the LTE chipsets are not ready and LTE deployments are limited. However, it may be possible in six to nine months.

- WiMAX chipsets may be more mature than LTE chipsets, and Sprint's WiMAX deployment may be widespread enough, making it possible to produce such a phone for Sprint.

- Sprint has promised to buy 7.5 million iPhones a year, which is a whopping 31% of US iPhone sales.

- The depth of the Sprint commitment effectively means there is no room for Android on that carrier.

So, if Apple's choices are:

a) Produce a non-LTE phone for AT&T and Verizon, only to obsolete it six months later in order to compete with LTE phones running Android.

b) Give Sprint a temporary six-to-nine-month "exclusive" on a phone that they could not produce for anyone else anyway, in return for 30% more U.S. sales and killing Android on Sprint.

... then it's plausible for them to choose b.


Yes, but WiMAX is pretty much dead; even Sprint is dropping it for LTE. I just don't see Apple producing a new phone with WiMAX, and having to test a completely different radio technology, when they pretty much know it won't last more than a few years.


There are several LTE phones out already, and Verizon is rapidly expanding its coverage zones, although I suppose battery drain is the unresolved issue.


I'm a bit pessimistic; if the hardware is old and rushed, why would the software or overall experience be any better?

I suspect Lab126 is cooking up something much better--but, unfortunately, it's not yet ready for prime time.


tl;dr: company denigrates a new technology that could disrupt its business


Of course, in their quest to copy Apple, this feature is implemented only after it appears in OS X Lion. It's as if the developers lack self-confidence or imagination, or both.



The timing is unfortunate, but I doubt they designed and implemented this in a few days.


Lion wasn't the first, and Unity will beat Lion to market, so that's something.


How many of these "missing workers" are simply mothers, homemakers, and students? The article does not mention these groups.


The article is also using an extremely broad definition of the labor force: everyone aged 16 and older. That definition can be useful for some demographic calculations, but I don't think it supports the conclusions being drawn here. In addition to your categories, it includes millions of retirees.


What on earth is wrong with that? Just because you're still in high school is no excuse not to pull your weight 40 hours a week. (Harrumph.)


Also, the article starts off saying "The total non-institutional civilian labor force (Americans 16 years and older who are not in an institution -criminal, mental, or other types of facilities- or on active military duty)", then goes on to discuss the breakdown further without ever mentioning military employment again.


When it comes to the military, you're only talking about a couple million people; not significant enough to move the needle.


"# Part time employed for non-economic reasons: 18.184 million people. Non-economic reasons include school or training, retirement or Social Security limits on earnings, but also childcare problems and family or personal obligations."

I would assume those people would fall into that category, but I might be missing something.


That category covers only those who are employed part-time for those reasons. People who have no job at all because of those reasons – perhaps because they don't even want a part-time job – fall into the 100+ million people over 16 who are neither working nor seeking work.


How do you handle the case where file X uses file Y, and you commit file Y? You could run the test for X when Y is committed in case Y broke X.


For most of my projects I don't. Some of them have rules listing that test X depends upon file Y, but for the vast majority of my projects, modifying file X runs only the test for X. This catches a significant number of regressions. Really it comes down to effort: I can either a) try to remember to always run the tests before committing, or b) have a basic hook that takes minutes to write and install (see the sketch below) and will always run at least one test, catching nearly all regressions.

I tried to do a) for years. Most of the time you run the tests, but not always. And heaven help you if you are on a team: there will be someone who never runs tests, and you will end up having to schedule a chunk of time for regression fixing before every release. So, to answer your question: who cares about the case where file X uses file Y?
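
For concreteness, here's a minimal sketch of the kind of hook I mean: a git pre-commit hook in Python. The tests/test_<name>.py convention is just an assumption here; adapt it to your own layout.

    #!/usr/bin/env python
    # Minimal git pre-commit hook: for each staged source file foo.py,
    # run tests/test_foo.py if it exists. The naming convention is an
    # assumption; adjust it to match your project.
    import os
    import subprocess
    import sys

    # Files staged for this commit.
    staged = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only"]).decode().splitlines()

    failed = False
    for path in staged:
        base, ext = os.path.splitext(os.path.basename(path))
        if ext != ".py" or base.startswith("test_"):
            continue
        test = os.path.join("tests", "test_%s.py" % base)
        if os.path.exists(test):
            # Run only the test file that matches the modified file.
            if subprocess.call([sys.executable, test]) != 0:
                print("Tests failed for %s" % path)
                failed = True

    # A non-zero exit status makes git abort the commit.
    sys.exit(1 if failed else 0)

Save it as .git/hooks/pre-commit and make it executable; from then on, at least the matching test always runs.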


The author claims unequivocally that global warming is a hoax. Submitted as evidence is a page which one-sidedly discusses troubling, suspicious, and unseemly behavior by some climate scientists. That is bad, but not proof of a hoax--not by a long shot. Such a strong claim supported by such a weak source makes it appear (at least to the skeptical) that the author may not be equally skeptical of all sources or points of view. Confirmation bias is a known bug in the wetware which makes it hard to be truly objective about anything, try as we might.

Had the author made a weaker claim I'd have read the rest of the article with much less skepticism!


Huh?

Telling us how he could have satisfied you wrt a different argument is interesting, but the question was "How much support should he have given for each of his arguments to satisfy you?"

Note that he suggested skepticism wrt multiple things, suggesting that general skepticism is a good idea, yet you're only discussing one of them.

Suppose we conclude that skepticism wrt global warming is unwarranted. What, if anything, does that say about his general conclusion?


It is not my intent to invalidate the author's thesis. I attempted to point out that skepticism is not enough; one must also apply skepticism consistently to all sources in order to free oneself from unconscious cognitive bias. As evidence, I use the author's treatment of global warming, where the evidence supplied is far too poor to support the conclusion. It's not a matter of quantity--it's just a blog post, after all, and I would have been tolerant of zero support for his claims. But if the author does supply support, it shouldn't be so crappy as to suggest that he does not apply a skeptical viewpoint to his own sources.


> I attempted to point out that skepticism is not enough; one must also apply skepticism consistently to all sources in order to free oneself from unconscious cognitive bias. As evidence, I use the author's treatment of global warming

Likewise, I use your treatment of his examples. As I pointed out, he provided better support for his global warming skepticism, which you rejected, than he did for his other skepticism, which you accepted.


Jinx can help verify mutex implementations themselves, although the example code that ships with the product is a little more advanced (lock-free stack). Some of the underlying technology is described here: http://s3.amazonaws.com/corensic/whitepapers/DeterministicSh... and here: http://www.corensic.com/WhyYouNeedJinx/CorensicHasaUniqueTec.... Because it's a hypervisor, it can aid in verifying synchronization primitives that are a mix of userspace and kernel code.


I do not see anything about memory fences. If Jinx does not support them, then it's pretty much useless for verification of synchronization algorithms; I've implemented dozens of advanced synchronization algorithms, and I can say that fence support is crucial. Also, if it works at the binary level (does not require recompilation), that also renders it useless, because at that level you lose information about the ordering of memory accesses, memory fences, and atomicity. For example, if you see a plain x86 MOV instruction, what is it? A non-atomic store? An atomic relaxed store? An atomic release store?


This project looks great. VirtualBox runs on Solaris, an OS whose ZFS filesystem supports writable snapshots and deduplication. Is there support for filesystem snapshots in Vagrant? It would be nice to spawn a new virtual machine from an existing on-disk snapshot, along the lines of the sketch below.
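
Something like this is what I have in mind (a sketch only: the pool/dataset names are hypothetical, though zfs snapshot and zfs clone are the real commands):

    # Sketch: spawn a VM's disk from a ZFS clone instead of copying the image.
    # Dataset names below are hypothetical.
    import subprocess

    BASE = "tank/vms/base"    # hypothetical dataset holding the golden image
    CLONE = "tank/vms/vm1"    # hypothetical dataset for the new VM

    # Snapshot the golden image (no-op failure is fine if it already exists).
    subprocess.call(["zfs", "snapshot", BASE + "@golden"])

    # A ZFS clone is a writable copy that shares blocks with its snapshot,
    # so it appears almost instantly and uses no extra space up front.
    subprocess.check_call(["zfs", "clone", BASE + "@golden", CLONE])

    # Then point VirtualBox/Vagrant at the disk image inside the clone.

Because a clone shares blocks with its snapshot, spinning up the Nth VM would cost almost no time or disk.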


I know Vagrant runs on Solaris, but I haven't done anything OS-specific for it, so I'm going to say "no" to this. Hit me up in IRC and we can talk about it.


Unfortunately the article does not mention the standard deviation of the distribution of mathematical ability after controlling for other factors. As mentioned here: http://www.lagriffedulion.f2s.com/math.htm, Larry Summers got in a lot of trouble at Harvard for making this assertion:

"It does appear that on many, many different human attributes-height, weight, propensity for criminality, overall IQ, mathematical ability, scientific ability-there is relatively clear evidence that whatever the difference in means-which can be debated-there is a difference in the standard deviation, and variability of a male and a female population. And that is true with respect to attributes that are and are not plausibly, culturally determined."

Is that assertion backed up by evidence? And, if true, is making this distinction useful in informing public policy and shaping our culture? In a fight between naturalistic and moral fallacies, which wins?


The article doesn't say a darn thing about methodology. For all we know, they could have done anything from comparing means, to measuring the overlap of various confidence intervals, to applying arcane statistical methods that predict the accuracy of classification based on the data.

Darn paywalls around scientific journals. I really wish more would go the route of http://www.plos.org/


I, too, was under the impression that this was pretty well accepted, at least as far as mathematical skills relating to the ability to handle spatial models.

Rephrasing your quotation: if you draw the curves so that ability is on the X axis and the number of individuals having that ability is on the Y axis, then the curve for females will be taller and narrower than the one for males.

That means that if you've got in mind a brilliant geometer, that person is more likely male than female. On the other hand, if you've got in mind an extremely bad geometer, that person is also more likely male than female.
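
To make that concrete, here's a quick back-of-the-envelope calculation. The means and standard deviations below are made-up illustrative numbers, not measured values; the point is only how a small difference in SD gets amplified in the tails:

    # Illustrative only: identical means, male SD slightly larger (made-up numbers).
    import math

    def tail(x, mu, sigma):
        # P(X > x) for a normal distribution, via the complementary error function.
        return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

    mu = 100.0
    sigma_m, sigma_f = 16.0, 14.0   # hypothetical SDs, same mean

    for cutoff in (130.0, 145.0):
        ratio = tail(cutoff, mu, sigma_m) / tail(cutoff, mu, sigma_f)
        print("above %.0f: about %.1f males per female" % (cutoff, ratio))

    # By symmetry, the same ratios hold at the low tail.

With these made-up numbers the ratio comes out to roughly 2:1 above 130 and nearly 4:1 above 145, even though the averages are identical.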

Discussions of this that I've read have hypothesized that we've evolved this way because, a zillion years back, humans tended to have the males go out hunting while the females took care of babies and domestic chores, and tracking and stalking prey demanded more of a person's spatial skills.

