Hacker News | astdb's comments

I found a documentary[1] on Masahiro a while back and it was mesmerizing to watch him work.

[1] https://www.youtube.com/watch?v=ZTiPNqeMS8E


RangerBot is an advanced version of the 2015 robot, with a newer submersible design, a handheld remote control, and computer vision to identify invasive species.

"In 2015, an early prototype of the robot "COTSbot" made international news. Now its successor the RangerBot is significantly more advanced.

The RangerBot is equipped with a vision system that allows it to "see" underwater while being operated using a tablet.

QUT Professor Matt Dunbabin said the robot used real-time vision to navigate and identify starfish."
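The article only says the robot uses a real-time vision system; as a toy illustration of the simplest conceivable detection step, here's a fixed color-threshold mask in NumPy. All values and the function name are invented for this sketch; the actual RangerBot pipeline would use a trained classifier, not hard-coded thresholds.

```python
import numpy as np

def mask_by_color(image, lo, hi):
    """Boolean mask of pixels whose RGB values fall within [lo, hi].

    A crude stand-in for the detection step; a real system would use
    a trained classifier, not fixed thresholds.
    """
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((image >= lo) & (image <= hi), axis=-1)

# Toy 2x2 "image": one purplish pixel (crown-of-thorns starfish are often
# purple/grey) against bluish background pixels. All values are invented.
img = np.array([[[150, 60, 160], [10, 80, 120]],
                [[20, 90, 130], [15, 85, 125]]], dtype=np.uint8)

mask = mask_by_color(img, lo=(120, 40, 140), hi=(200, 100, 200))
print(int(mask.sum()))  # -> 1 candidate pixel
```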


Is the opinion pdf formatted with LaTeX? Couldn't help noticing the margins etc.


Others have asked, and despite some surface similarity the answer seems to be "no": https://www.reddit.com/r/LaTeX/comments/1galio/does_the_gove...

It doesn't address the LaTeX question, but this page had some interesting details on the style guide for Supreme Court opinions: https://lawyerist.com/style-guide-supreme-court/


How did catalogue/mail-order retailers handle this (presumably in business since before the Internet)?


This ruling overturned previous rulings that allowed mail-order businesses to avoid collecting sales tax outside of their state.


The standard evasion tactic for the SR-71 when a missile was launched at it was to simply accelerate.


That would be the only option. It's not like you can turn much; you'd pass out or snap the airframe.

I imagine, though, that a little snaking left/right might compromise getting the images that were the point of the mission.


Apparently it could also turn pretty hard. Mind, its turn radius was huge, but at Mach 3.2 that still meant pulling a lot of g's.
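A back-of-the-envelope check supports both halves of that claim. Assuming Mach 3.2 at cruise altitude is roughly 950 m/s (speed of sound ~297 m/s up there; all numbers here are assumptions, not from the thread), a level coordinated turn at total load factor n gives r = v^2 / (g*sqrt(n^2 - 1)):

```python
import math

G = 9.81    # m/s^2
V = 950.0   # assumed: Mach 3.2 where the local speed of sound is ~297 m/s

def turn_radius(v, load_factor):
    """Radius of a level, coordinated turn at total load factor n.

    The lateral acceleration available for turning is g*sqrt(n^2 - 1),
    and r = v^2 / a_lateral.
    """
    a_lateral = G * math.sqrt(load_factor**2 - 1)
    return v**2 / a_lateral

for n in (1.5, 2.0, 3.0):
    print(f"{n} g -> turn radius ~{turn_radius(V, n) / 1000:.0f} km")
# Even a modest 2 g level turn at this speed sweeps a circle ~53 km in radius.
```

So at Mach 3.2 the radius is tens of kilometres even while the crew feels substantial g-load, exactly the "huge radius but lots of g's" combination described.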


https://us.teamblind.com/article/sharing-my-offer-numbers-fr... provides some insights into how to go about interviewing with top tech companies, preparation, offer info etc.


Shoutouts to Snap


Having done some work with the state of the art in AI, I personally don't think AGI is near - it might not even be possible. But the catch is the unreliability of (even expert) predictions about technology futures. My take is that it's worth taking pragmatic steps toward studying AI safety measures (e.g. OpenAI), but not going as far as talking about the likes of 'AI research regulation'.


> My take is that it's worth taking pragmatic steps toward studying AI safety measures (e.g. OpenAI), but not going as far as talking about the likes of 'AI research regulation'.

Sometimes it makes more sense to be cautiously optimistic (proactive) rather than reactive. We have already gone down that reactive slope, and it's better to act now before it's all too late [0].

[0] https://blogs.scientificamerican.com/roots-of-unity/review-w...


I think the kind of people who would fill these AI regulation roles would be pseudo-technical, bureaucratic types who would prove to have offered no value if sudden, unexpected AGI really did come about.


Ignoring the topic of the linked article,* I'd argue that there are examples of being too cautious as well. There's a lot of good that we could have done with GMOs that is not being done because of very restrictive regulation. Ironically, it means that GMOs are mostly used for things that are not as obviously good, because that's where there's enough profit to be made in the short term to make the research worth it.

I'm a bit afraid that this will happen with self-driving cars and AI: that politicians will create draconian policies and laws to protect against the threat of AGI etc., without understanding or knowing what the real threats even are (just look at the trolley-problem debate...). This could make it economically prohibitive to develop many technologies which have the potential to save many lives as well as improve quality of life overall.

* It seems to be more about how rules and policies can be unfair, and only to a small extent about how policies can be made opaque by being internal to some ML system.

There's a lot more money going into making plants resistant to pesticide than into making plants better adjusted for harsh conditions or more nutritious, things that could potentially have a huge effect for poor people.


If AI scientists actually believed that the general public would believe the talk about existential threats, they would be afraid of activist groups sabotaging and occasionally firebombing their laboratories, like sometimes happens with GMO research. Clearly they are not.


> Having done some work with the state-of-the-art of AI, I personally don't think AGI is near - might not even be possible.

(Just venting here, not even primarily at you.)

360k babies are born each day. Clearly it is possible to reproduce intelligent machines. The only way it would be impossible to do the same artificially is literally if life were a magic, non-physical thing. I wish people who state things like this would also state any religious beliefs that lead them to think so.


Yes, we can find a mate and create a baby. We can't know whether it will be ready to fill a particular functional role after 20 years of training. This works OK for an entire society filling its workforce or army, but seems rather inadequate for a technology company to deliver working products on spec and within a reasonable contract delivery period.

If this is the basis of future AGI, I have to wonder which flavor of dystopia we'll get to enjoy. Will it be a child-selling dystopia where we all raise a dozen kids hoping that some of them will pay off? Or more like silk farming, where some capitalized breeder sells kits to all the villagers and buys back the developed products if and only if the villager was lucky enough to raise them to fruition?

Also, if a human baby is our only basis for assuming AGI, then we ought to think about it like genetic engineering or human augmentation. We'd better anticipate providing schools, hospitals, psychiatrists, courts, and prisons to deal with the wide variety of behaviors and misbehaviors which will come with these new products, which have as little determinism as a baby's lifecycle.


This is hilarious. (I mean, it was intended as a joke, right?)


Just because we can observe something happening doesn't mean we can understand the mechanisms of how it happens, and even if we CAN understand the mechanism, it doesn't mean the mechanism is feasibly reproducible within our resource capability.

An example might use crypto: you observe random information flying through the air, you may recognize it as an encrypted channel, and you may see a machine acting in response to this encrypted signal.

With enough observation you may be able to mimic the encrypted signal to get the machine to act in a certain way, but you haven't decrypted the actual signal (and can't ever, if you believe in strong crypto), and can't ever say with any certainty that you know the full scope of the communication taking place or the capabilities of the machine you've been observing.

At any point you can make your own version of the machine, mimicking the language and tuning it to be an exact replica of the original, even responding to the original signal. Yet is it truly a copy of the original?


You're missing the point several times over. The entire point is very simple: if there is no magic involved it's down to physics. If it's possible to create brains without magic, it is possible to do so artificially.


I don't think I am... if a task is possible, but takes 100x the lifetime of the universe to accomplish, is it actually "possible"? Or is accomplishing that task the same as "magic"?

If I send a message using a one-time pad, the other person knows what I sent; you can ask questions and see that communication is actually happening... so if we're not using magic to communicate, it must be possible to read it in the same way, right?

Yet it's mathematically impossible to do so without access to hidden information (the shared key)... no law says that you can access that information, even with all the computational ability available in the universe.
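The one-time-pad claim is easy to demonstrate concretely. A minimal sketch: XOR with a truly random, message-length key decrypts trivially for the key holder, while for anyone else the ciphertext is consistent with every plaintext of the same length (the candidate messages below are invented for illustration).

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # used once, as long as the message
ct = xor(msg, key)

# With the key, decryption is trivial.
assert xor(ct, key) == msg

# Without it, the ciphertext is consistent with EVERY same-length plaintext:
# for any candidate message there exists a key that "decrypts" ct to it, so
# ct alone carries no information about which message was actually sent.
fake_key = xor(ct, b"retreat at ten")
assert xor(ct, fake_key) == b"retreat at ten"
```

That last property (perfect secrecy) is exactly why no amount of observation or compute recovers the plaintext without the hidden key.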

The mechanisms and communication patterns of consciousness are similar; if it takes until the sun explodes to train a true AGI, then aren't we just getting pedantic about what counts as possible vs. magic?


That doesn't really make much sense to me, I'm afraid. Please bear with me, but my first guess is that you have invented a coping mechanism that allows you to deal with conflicting information in your mind. Are you by any chance religious?

(I figure I should be allowed to ask such a normally speaking quite loaded question because of my previous statements, up the thread.)


No, I am not, by any chance, religious. I don't rule out the possibility of a God or God-like intelligence existing in the Universe, but that's simply as a result of the "absence of evidence is not evidence of absence" principle I hold to.

I don't really see how religion factors into this though... I feel like I'm talking about a simple concept too. If I show you an encrypted message and show you that other people can read the contents with a key, then ask you to read it without the key, why can't you? It's not magic, it's math.


Well, my apologies for being presumptuous then.


Your first guess? You don't get to dismiss peoples' arguments as superstition just because you don't understand them.


I don't think anyone is saying there won't be human level AI 500+ years into the future. Like you said, it's not against the laws of physics or anything.

The question is: will it happen in less than 50-100 years, or would we be like medieval alchemists rushing to outline the first nuclear weapons treaties right after having just invented black gunpowder?


Concorde (or the Tu-144) could've kept up with the shadow


They did this on June 30, 1973. While it stayed in totality 10x longer than would have been possible on the ground, it did not stay in for the entire path of totality. Look at the umbral velocity - only the bottom of the parabola is attainable by even a Concorde.

http://xjubier.free.fr/en/site_pages/solar_eclipses/TSE_1973...
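The geometry behind the 10x figure is simple: both a ground observer and the aircraft sit inside an umbra of width w, so totality lasts w / v_shadow on the ground but w / (v_shadow - v_plane) for a plane flying along the track, and the ratio is independent of w. A sketch with illustrative speeds (assumed for this example, not taken from the linked page):

```python
def totality_extension(v_shadow_kmh, v_plane_kmh):
    """Factor by which chasing the shadow stretches totality.

    Ground observer: t = w / v_shadow. Aircraft flying along the track:
    t = w / (v_shadow - v_plane). The ratio is independent of the umbra
    width w.
    """
    return v_shadow_kmh / (v_shadow_kmh - v_plane_kmh)

# Illustrative speeds (assumed): Concorde cruise ~2150 km/h, a slow
# stretch of the umbral track ~2400 km/h.
print(round(totality_extension(2400, 2150), 1))  # -> 9.6, i.e. roughly 10x
```

This also shows why only the slow bottom of the velocity parabola helps: where the shadow moves much faster than the plane, the denominator grows and the extension factor collapses toward 1.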


These are photos from the Cassini spacecraft's set of 'Grand Finale' manoeuvres around Saturn - this article has a good summary: http://www.abc.net.au/news/2017-04-28/cassini-sends-back-clo...

The images at the original link are the newest raw images, taken during the flyby about two days earlier - possibly the closest images ever taken of Saturn.

Google did a great doodle too https://www.google.com/doodles/cassini-spacecraft-dives-betw...

