
iPhone 5 user here on iOS 10.1.1. I haven't noticed the issue, just the understandable decay of my 4-year-old battery, which lasts less than it used to.


Me too, iPhone 5 / iOS 10.1.1. No discernible issue.


AI is the biggest hype of the moment. It reminds me of the film "Eagle Eye" (2008), where a kind of Zeroth Law-empowered AI wants to assassinate the president of the US. Despite its incredible intelligence, what I found more unrealistic was its control of all internet-connected systems in the US (traffic, remote-controlled drones, phones, different OSes, etc.), "just because I am an AI and I can do whatever I want".

For god's sake, it is 2016 and we are still unable to have a decent dependency system for most programming languages. AI is still decades away from rising up against us.


"Despite its incredibly intelligence what I found more unrealistic was its control of all internet-connected systems in the US (traffic, remote control drones, phones, different OS, etc) "just because I am an AI and I can do whatever I want""

On the other hand, "Mirai botnet". Even the "mere humans" are doing pretty darned well at laying their hands on vast powers in the current environment.

The hard-to-believe part is an AI that is much smarter than human. Once you have that, I have no problem swallowing that it could pretty much hack whatever it wanted in our current world as long as it could get a connection. That's not because I believe in magical hacking powers, it's because we demonstrably don't really care much about security and we get exactly the systems you'd expect as a result. My local red team [1] seems to be able to get past pretty much anything it wants to and they're not superhuman AIs.

An AI that operated in a world where all programming languages were memory-safe would at least have a harder time of things. Though between things like Rowhammer and much higher-level hacking (social engineering, for instance), I suspect it would still not face much challenge.

[1]: https://en.wikipedia.org/wiki/Red_team


To play devil's advocate: let's say in 2030 we can fully simulate a human brain and it works. Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30-year-old human. After that it could use its 10000x speed advantage to effectively have the equivalent of 10000 30-year-old hackers looking for exploits in all systems.

I'm not saying that will happen or is even probable, but when A.I. does happen it's not inconceivable it could easily take over everything. I doubt most current state actors have 10k engineers looking for exploits. And with A.I. that number will only increase as the A.I. is duplicated or expanded.


Let's say in 2030 the aliens invade and it works. Let's also assume they're 10000x more powerful than wetware (a highly conservative estimate?). That means in about 1 yr they should be able to attack us as effectively as 30 years of human war. After that they could use their 10000x power advantage to effectively have the equivalent of 10000 30-year wars looking to wreak havoc.

I mean at this point you're just making things up...


Honestly, making things up is exactly how we find out whether something can work. For example:

Suppose I can talk into a tiny little device and someone can hear me from miles away.

That sounded like witchcraft at some point but it became a reality. Futuristic thinking needs to suppose lots of crazy-sounding things are possible.


Compare with:

Suppose I can think into a tiny little device and someone can hear me from miles away.

That still sounds like witchcraft. Before one of them gets invented, or at least the basic underlying principles are discovered, how do you tell them apart?


Simulating a brain is not aliens. Or maybe it is and I'm hopelessly naive. Lots of smart people are working on simulating brains, and their estimates are that it's not that far off from being possible to simulate an entire brain in a computer, not at the atom level but at least at the functional level of neurons. At the moment there's no reason to believe they're wrong.


> The most accurate simulation of the human brain ever has been carried out, but a single second’s worth of activity took one of the world’s largest supercomputers 40 minutes to calculate.

http://www.telegraph.co.uk/technology/10567942/Supercomputer...

The above supercomputer in 2014 was 2400x slower than the human brain. Moore's law is dead, so I think your 10000x and 2030 estimates are grossly optimistic.
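To spell out the arithmetic behind that figure:

    40 minutes of compute per 1 second of simulated activity
    = 2400 seconds per 1 second
    = a 2400x slowdown relative to real time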


If you applied your logic to DNA sequencing: the first DNA sequencing took ages, and by various estimates at the time it would have taken 100+ years to fully sequence DNA. Fortunately, exponential progress doesn't work that way, and full DNA sequencing happened orders of magnitude sooner than those pessimistic estimates.

I see no reason to believe A.I. will be any different. But then I don't believe Moore's law is dead. We'll find other ways to extend it. Insert magic here.

http://michaelgalloy.com/wp-content/uploads/2013/06/cpu-vs-g...


But, but, Quantum computing.

Jazz hands


Wetware is unbelievably, ridiculously efficient compared to anything we can make out of silicon and metal. Our most complex artificial neural networks, endless arrays of theoretical neuron-like structures, are much smaller, less complicated, slower, and less energy-efficient than the actual neurons in a brain. Of course, we can make computers as large and energy-consuming as we want, but this limits their potential to escape to the web and doesn't address the speed issue.


General AI like that is not years or decades away. The problem hasn't even been stated clearly yet. AGI is probably a century away or more. It's not a resource problem, it's a problem problem. I attended an AGI conference a couple of years back with the luminaries of AGI attending (held at my alma mater, University of Reykjavík). The consensus was we didn't even know which direction to take.


The same argument can be used the other way. If we don't even know which direction to take, what makes you think that AGI is a century or more away? Say in 10 years we better understand the problem we want to solve and the direction to take; what makes you think it would take another 90 years to solve, versus 20 or 30?

I think we simply have no idea when this could happen; it could be in 20 years, it could be in 200. But one thing is sure: when it happens, it will have drastic implications for our society, so why not start thinking about it now, in case it's 20 and not 200?


We should. I was not expounding a policy. I was merely stating the results of a conference on the subject at hand. All of what was discussed in the article is weak AI. Narrow programs delivering narrow features. Social engineering doesn't require strong AI.


"If we don't even know much about our universe, what makes you think that an alien invasion is a century or more away?"

Yet I don't see us losing our heads over the chance of an alien invasion.


Based on the fact that there hasn't been any known alien encounter during human written history, that we haven't found any artifact of such an event even in the distant past, that a 100 ly radius is really tiny at the scale of the galaxy, that we haven't found any sign of life outside Earth, and that anyway, if an alien civ is advanced enough to come here and invade us we can't really hope to do anything against that, there is indeed no need to spend time worrying about it.

Considering the evolution of computing and technology in general in the last 50 years would you consider the two things to be remotely comparable?

I personally don't.


Neither have we experienced a true AI, and none of the gains in the last 50 years have brought us anything near it, only more advanced computing ability and "trick" AI.

We just assume technology will improve exponentially based on an extremely small sample size. Has it never occurred to us that the technology curve may be horizontally asymptotic rather than exponential?

The ICE (internal combustion engine) was an amazing piece of technology that grew rapidly, from cars to military warplanes to our lawnmowers. Yet we cannot make them much more efficient or powerful without significantly increasing resources and cost. If you had judged the potential of the ICE on the growth it had then, we'd be living in an efficiency utopia now.


Based on the fact that there hasn't been any known AI encounter during human written history, that we haven't found any AI artifact of such an event, even in a distant past, .... , and that anyway, if an AI civ is advanced enough to explode exponentially we can't really hope to do anything against that, there is indeed no need to spend time worrying about that.


AFAICT there are 2 paths to A.I.

Path 1 is to understand how the brain organizes info and recreate those structures with algorithms. This path is 100+ years away.

Path 2 is to just simulate various kinds of neurons with no understanding of how they represent higher-level concepts. This path is < 20 years away by some estimates.

You probably believe path #2 is either also 100+ years away or won't work. I happen to think it will work the same way physics simulations mostly work. We write the simulation and then check to see if it matches reality. We don't actually have to understand reality at a high level to make the simulation. We just simulate the low level and then check if the resulting high level results match reality. It certainly seems possible A.I. can be achieved by building only the low-level parts and then watching the high-level result.
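Purely for illustration (a toy Python sketch, nothing like a real brain model), "simulate the low level and watch the high level" could look like stepping a handful of leaky integrate-and-fire neurons forward in time; any higher-level behaviour would only show up in the spike patterns, not in anything written by hand:

    import random

    # Toy leaky integrate-and-fire network: only the low level is described
    # (membrane potential, leak, threshold, random wiring); any "higher level"
    # behaviour is whatever emerges in the spike trains.
    N, THRESHOLD, LEAK = 100, 1.0, 0.95
    weights = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(N)]
    potential = [0.0] * N

    for step in range(1000):
        spikes = [i for i, v in enumerate(potential) if v >= THRESHOLD]
        for i in spikes:
            potential[i] = 0.0                      # reset after firing
        for j in range(N):
            potential[j] *= LEAK                    # passive decay
            potential[j] += 0.05 * random.random()  # background input
            for i in spikes:
                potential[j] += weights[i][j]       # input from spiking neurons

    # "Checking against reality" would mean comparing the resulting spike
    # patterns with measured ones -- the part we can do without understanding them.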


I have found myself repeating this paraphrased quote often recently, which is relevant.

"We didn't achieve flight by simulating birds; we needn't worry about achieving ai by simulating brains."


If the Hameroff-Penrose conjecture is correct, then the only simulation possible is whole replication with quantum effects present. The unreasonable effectiveness of neural networks makes that unlikely, though.


So... I don't want/need to think about this because I'll be long dead? :) On a related note, how optimistic are you about life-extending medical technology (which is likely to be accelerated by even minor advances in computing and AI)?


Not what I said. I'm cautiously optimistic about life extending technologies. I'm not sure computing technology and AI will carry the day, I'm more inclined towards the physical sciences.


Why do you believe it will run so much faster than wetware? It would be great to see the math. Also, how much power do you think it would take?


>Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30-year-old human.

Assuming your assumptions, and assuming it had access to the information, you meant one day, not one year.
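The arithmetic, using the parent's own numbers:

    30 years ≈ 30 × 365 ≈ 10,950 subjective days
    10,950 / 10,000 ≈ 1.1 wall-clock days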


A bigger issue (as the NYT correctly postulates) is the industrialisation of cybercrime.


Evil AI is the ultimate in plausible deniability for evil people. They can just blame the algorithm for whatever scheme they're plotting. You could easily have an evil AI god that we can't turn off doing evil things while the man behind the curtain gets away with things he never could if they weren't attributed to the AI. Witness the recent YouTube channel takedowns at the hands of Google's algorithm.


I really wonder if the man behind the curtain really has any control. The info-streams are exploding, and the control mechanisms are half-assed and unsophisticated, and not really in the interest of the tech world, which likes bragging about how many guinea pigs its networks can reach.

People keep worrying about Google or Facebook or the media controlling and shaping the info streams.

What if they can't? What if we have already passed the point where things can be controlled?

Does anybody really believe Google or Facebook will admit that they have lost control?


That seems to be true in the meat world. It probably has been for a long time. Leaders of the world may have the power to end it in a nuclear holocaust, but that's probably the extent of it - a few high-impact things that can be triggered by individuals, because they were deliberately designed like that. As for the rest of stuff, it is my belief that "the economy" runs by itself. We cannot stop it, because the economy is just an aggregate of "what humans do because they're humans". We can influence it by various degrees, but it's more like prodding a complex feedback system and seeing what happens - I doubt there exists a person or group of people who can tell the economy "go there" and the economy will follow.


I sometimes give a talk to startup companies, in which I tell them why their code should be horrible. It's an intentionally provocative thing to say, but there is reasoning behind it, and some of the same reasoning applies to a lot of scientific code. The linked article has a few comments that tangentially touch on my reasoning, but none that really spell it out. So here goes...

Software development is about building software. Software engineering is about building software with respect to cost. Different solutions can be more or less expensive, and it's the engineer's job to figure out which solution is the least expensive for the given situation. The situation includes many things: available materials and tools, available personnel and deadlines, the nature and details of the problem, etc. But the situation also includes the anticipated duration of the solution. In other words, how long will this particular solution be solving this particular problem? This is called the "expected service lifetime".

Generally speaking, with relatively long expected service lifetimes for software, best practices are more important, because the expected number of times a given segment of code will be modified increases. Putting effort into maintainability has a positive ROI. On the other hand, with relatively short expected service lifetimes for software, functionality trumps best practices, because existing code will be revisited less frequently.

Think of the extremes. Consider a program that will be run only once before being discarded. Would we care more that it has no violations, or would we care more that it has no defects? (Hint: defects.) That concern flips at some point for long-lived software projects. Each bug becomes less of a priority; yes, each one has a cost (weighted by frequency and effect), but a code segment with poor maintainability is more costly over the long term, since that code is responsible for the cumulative costs due to all potential bugs (weighted by probability) that will be introduced over the lifetime of the project due to that poor code.
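To make that crossover concrete, here is a toy back-of-the-envelope model (every number below is invented for illustration): cleanliness costs something up front, messiness adds a surcharge to every future change, and which one is cheaper depends entirely on how many changes the expected service lifetime implies.

    # Toy expected-cost model -- all numbers are hypothetical.
    UPFRONT_CLEANUP = 10.0     # extra cost of writing it "properly" now
    CHANGE_COST_CLEAN = 1.0    # cost per future change in clean code
    CHANGE_COST_MESSY = 3.0    # cost per future change in messy code
    CHANGES_PER_MONTH = 0.5    # how often this code gets revisited

    def expected_cost(months, clean):
        changes = months * CHANGES_PER_MONTH
        if clean:
            return UPFRONT_CLEANUP + changes * CHANGE_COST_CLEAN
        return changes * CHANGE_COST_MESSY

    for months in (6, 24, 120):
        print(months, expected_cost(months, clean=True), expected_cost(months, clean=False))
    # At 6 months the messy version wins (9 vs 13); by 24 months the clean one does (36 vs 22).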

So, short expected service lifetimes for software, prioritize correct behavior over maintainability; long expected service lifetimes for software, prioritize maintainability over correct behavior. The source code written by a brand-new company will be around for six months (maybe) before it gets factored away, or torn out and rewritten. During that time, less-experienced coders will be getting to know new technologies with foreign best practices, and those best practices will be violated frequently but unknowingly. Attempting to learn and retroactively apply best practices for code that will likely last a short period of time is simply more expensive (on average) than just making things work. The same applies to scientific code, which gets run for a graduate degree or two before being discarded. If the code wasn't horrible, I'd think that effort was being expended in the wrong places.

In my experience, most "fights" about best practices (whether a technique should be considered a best practice, or whether a best practice should be applied) usually boil down to people who have different expected service lifetimes in mind. (One of those people is probably considering an expected service lifetime of infinity.)


The story is more complex than the article says. A Spanish forum (one of the top-40 most-visited sites in Spain) voted massively for the name "Blas de Lezo" (https://en.wikipedia.org/wiki/Blas_de_Lezo), a Spanish admiral who in 1741 defeated a British army far bigger than his own.

After it gained the #1 position, the organization decided to withdraw the name from the poll, causing more controversy around the digital process.


Certainly one of the funniest stories to pop up on the interwebs during the last year...

> A band of Spanish net buccaneers has mounted a determined incursion into Her Maj's territorial cyberwaters by demanding that Blighty's forthcoming Royal Research Ship be named the RRS Blas de Lezo, in honour of the man who administered the British a serious military shoeing during the War of Jenkins' Ear.

http://www.theregister.co.uk/2016/03/29/boaty_mcboatface_spa...

It was going to win, so they had to shut down the candidacy. Not even the Brits can handle the power of roto2...


TIL there was a War of Jenkins' Ear...


If you believe it would have been outrageous for this man to have his name on a British ship, you'll be surprised by the shitstorm that would happen if his name were given to a Spanish ship.


According to Wikipedia, several Spanish ships have been named after him.


Have been. Try it now and see what happens.


Is "2004" considered close enough to "now"?

https://en.wikipedia.org/wiki/%C3%81lvaro_de_Baz%C3%A1n-clas...


No.


OK, keep downvoting. I couldn't care less. Do you think I'm being too terse? Take a look at the last edit. Or is it something else? I don't care either way.


Among other things, I guess those of us who don't know every piece of trivia about Spanish culture would get an explanation of what exactly is unsavory about this Blas de Lezo person.


It was in the Wikipedia article linked by the OP.

Edit... anyway: De Lezo took part in the War of the Spanish Succession. In that war, Catalonia lost some fueros (Middle Ages common law) because it sided against the Bourbons (who won), and in the revisionist version of history that we've been hearing in very recent years, that's tantamount to genocide.


Thanks, that was the explanation you could have given several posts ago. While skimming the Wikipedia article, I noted that he got a statue in Madrid, then skipped the rest of that paragraph, which was the only one that hinted at this controversy, and even then did not give the reason you give here. Please excuse us for not following the Spanish news so closely, and then downvoting you for your sheer arrogance.


My sheer arrogance considers you excused.


Please explain.


Excellent article; it explains pretty much the same thing (but with much more mathematical detail) as the article I wrote some months ago about why parser generator tools are mostly useless [1].

[1] https://buguroo.com/why-parser-generator-tools-are-mostly-us...


How is this different from SaneBox.com ?


Different branding, a different algorithm, an Outlook plugin, a summary function and different filters.


It is also automated filtering and prioritization, and it is aimed at corporate users: individuals receiving hundreds of emails daily who therefore cannot filter them semi-manually.

Furthermore, there are next-best-action features to save time, keep organized and reduce noise: move an email to the appropriate folder automatically, set an email aside until a later (more appropriate) time, mute yourself from a conversation until you are named, etc.


I find this article really interesting. I am a native speaker of Spanish and the verb 'to be' is usually one of the first lessons we learn when we study English.

Spanish has two different verbs to cover the meaning(s) of 'to be': ser (to exist) and estar (to stay). I always thought merging those meanings into a single verb did not help to express the richness of the English language.


Within E-Prime, some people think that using the verb "to be" to express staying, location, etc. should be allowed. For example, "The shop is over there" or "the cat is on the mat" would be fine. However, I think the majority of E-Prime adherents think these should be excluded too.


Starting a business might not have anything to do with being a good engineer


Then one has to accept the salary they get.


The irony is that Ghostery is reporting 14 trackers on the page as I read the article...


I think the reason is fairly simple: even the most evident, no-doubt-about-it SQL injection vulnerability found by an SCA tool may never be exploited at all in production (for instance, because of a WAF). So the obvious benefits of static analysis are not that obvious to your employer.
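For illustration (a hypothetical Python/sqlite3 snippet, not taken from any particular tool), this is the textbook kind of finding, next to the parameterized fix the tool wants; in production a WAF may well block the crafted input anyway, which is exactly why it is easy to deprioritize:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

    def find_user_unsafe(name):
        # Flagged by static analysis: user input concatenated into SQL.
        query = "SELECT role FROM users WHERE name = '%s'" % name
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # The fix the tool suggests: bound parameters instead of string building.
        return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()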

Sometimes we forget that companies do not want perfect code or the best possible, well-designed software, but a product that makes them money.

My experience is that developers only use those kinds of tools if they are forced to by their QA managers or bound by contract. Programmers usually don't want to fix or track bugs.


So, you are saying "Developers don't want to deliver quality."?

If that is true, then I don't want to work with them.


I am saying "Developers do not want to: 1 - pay A LOT of money for advanced solutions that are more than AST checkers (hello SonarQube) or big piles of false positives; 2 - add overhead to their workflows (more than an IDE plugin is harmful, and what happens with those devs not using an IDE?); 3 - spend time figuring out whether the static analysis results make sense or not, one by one."

A typical SCA tool can report hundreds or thousands of occurrences for a certain code base. How are developers going to deal with them?


I am from an engineering background and not solely a software guy, so forgive my different view on this topic.

I learned that every error you could have fixed early on will cost you about 10x as much to fix in the next stage.

All the new principles like Agile have not changed that.


I think the idea is not that it isn't worth fixing errors as soon as possible (it is), but that static analysis tools produce too many false positives and too many non-errors to be useful.


I guess you can combine the points: if you use static analysis from the start and have it configured right, the number of false positives should stay relatively low.


Gosh, I'm the opposite. I do consulting work, and whether my clients want it or not, the work I provide them uses several code analysis tools: FindBugs, Cobertura, Checkstyle, and PMD. I simply don't write code without those tools present if it's even remotely reasonable to do so.

