azag0's comments | Hacker News

I don't really see why those two should be compared? Pandas is a Python library for programmatic work with tabular data. Visidata gives you a live view of a file storing tabular data.


Thanks for clarifying, that makes sense


Ok, that's financial liability. What about criminal liability? What if the car breaks the rules and kills someone? Will the programmers be responsible? The CEO? Can companies be criminally convicted in the US? Or will it be on par with other "machine accidents", where there is typically no criminal responsibility?


I don't understand this obsession with finding someone to hold responsible for an accident before the accident happens. When an accident with a self-driving car actually happens, if someone is found to have committed a crime, that person will be charged.


If you run a red light and kill someone, it's pretty clear who is responsible though. If a driverless car runs a red light because of a hardware defect, who is? No one? Or do you think there will be a committee trying to establish what exactly went wrong and where? With planes, when autopilot does something stupid, Boeing/Airbus will ground all planes of the same type until they can figure out what the issue is. I can't imagine we will ever hold cars to the same standard as we hold airplanes, that's just impossible.


We already have a framework in place to deal with this.

If your car's brakes fail and you run the red light because you had no way to stop, you are not at fault. Neither is the engineer that developed the braking system, unless they were criminally negligent.

If the software that runs the red light fails and gives everyone a green light and you run it and get in an accident, you are not at fault, and the person who wrote the code will not be going to jail, again unless they are found to be criminally negligent, or malicious.

If a systemic fault is found at any point, a recall will be done to remedy it.

This isn't some new class of problem here; this is the same old thing we have been dealing with for many years. You already rely on many, many people doing their jobs correctly every day to not die; this is no different.


If your car's brakes fail, it's probably a mechanical fault due to natural wear and tear, for which it is easy to avoid assigning blame. If a car's autonomous capability spectacularly fails, it's far more likely to be due to a bug introduced at the design stage. Many of these bugs are likely to cause vehicles to behave in ways human drivers would certainly be prosecuted for, and we really haven't got much precedent for assigning responsibility for bugs in complex software that performs varied, life-threatening interactions with other humans, replicating tasks humans themselves perform imperfectly and are frequently prosecuted for negligence over.

It would be a strange system which punishes humans for individual serious driving flaws but merely requires manufacturers to remedy the systematic serious driving flaws they introduce.


Sometimes, nobody should be held responsible. It's called an accident.


And if nobody is held responsible, there is no risk for a company to put shoddy software on the street.


At which point they've crossed the line into a crime, so the liability is criminal. These concepts are already in play in other industries that use software in devices that could harm or kill.


Good luck proving their software is defective.


You mean other than, "Your vehicle software is defective and unsafe and therefore banned for public road use by NHTSA as non-conforming to federal safety standards"?


In other words... holding them responsible.


A machine that damages property or causes injury via its own ineptitude is defective, not accidental. Hitting a deer that jumps out from roadside cover is an accident. Plowing over a bicyclist in a well-lit environment isn't. There is going to be a lot of the latter with self-driving cars.


In the Eurozone, SEPA transfers are free. Outside the Eurozone, but within the EU, there are typically small fees.


In the same way that there is no formal theory for the exact shapes of proteins? I think it’s possible. But as with proteins, there are probably some general aspects of the problem that can be explained in simpler terms.


In this regard, it is similar to how natural sciences are done. The hyperparameter space of possible experiments is immense and the experiments are expensive, so one has to go with intuition and luck. Reporting this is difficult.

[edit:] In this analogy, deep learning currently lacks any sort of general theory (in the sense of theories explaining experiments).


> In this regard, it is similar to how natural sciences are done. The hyperparameter space of possible experiments is immense and the experiments are expensive, so one has to go with intuition and luck. Reporting this is difficult.

I'd agree it's done in a sort-of scientific way. But I don't think you can say it's done the way natural science is done. A complex field, like oceanography or climate science, may be limited in the kind of experiments it can do and may require luck and intuition to produce a good experiment. But such science is always aiming to reproduce an underlying reality, and the experiments aim to confirm or refute a given theory.

The process of hyperparameter optimization doesn't involve any broader theory of reality. It is essentially throwing enough heuristics at a problem and tuning them enough that they more or less "accidentally" work.

You use experiments to show the heuristic approximation "works", but this sort of approach can't be based on a larger theory of the domain.

And it's logical that there can't be a single theory of how any approximation to any domain works. You can have a bunch of ad-hoc descriptions of approximations, each of which works with a number of common domains, but it seems logical that these will remain forever not-a-theory.


Exploring even a tiny, tiny, tiny part of the hyperparam space takes thousands of GPUs. And that is for a single dataset and model---change anything and you have to redo the entire thing.

I mean, maybe some day, but right now, we're poking at like 0.00000000001% of the space, and that is state-of-the-art progress.
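
To make that concrete, here is a back-of-the-envelope sketch in Python (the hyperparameter ranges and the 12 GPU-hours per run are made-up numbers, purely for illustration):

    # Illustration only: how quickly a naive grid over a handful of
    # hyperparameters blows up. All ranges and costs below are made up.
    import itertools

    grid = {
        "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
        "batch_size": [32, 64, 128, 256],
        "num_layers": [2, 4, 8, 16],
        "dropout": [0.0, 0.1, 0.3, 0.5],
        "optimizer": ["sgd", "momentum", "adam"],
    }

    configs = list(itertools.product(*grid.values()))
    gpu_hours_per_run = 12  # hypothetical cost of one full training run

    print(len(configs))                      # 960 configurations
    print(len(configs) * gpu_hours_per_run)  # 11520 GPU-hours, for one dataset and model

And that grid only has a few coarse values per knob; finer grids or more hyperparameters multiply the cost further.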


A DNN might be more effective at exploring the hyperparameter space than people are with their intuition and luck. Rumor is Google has achieved this.


Google simply has the computational resources to cover thousands of different hyperparameter combinations. If you don't have that, you won't ever be able to do systematic exploration, so you might as well rely on intuition and luck.


This is not accurate. Chess alone is so complex that brute force would still take an eternity, and they certainly don't have a huge incentive to waste any money just to show off (because that would reflect negatively on them).

But how does it work? It's enough to outpace other implementations, alright. But the model even works on a consumer machine, if I remember correctly.

I have only read a few abstract descriptions and I have no idea about deep learning specifically. So the following is more musing than summary:

They use the Monte Carlo method to generate a sparse search space. The data structure is likely highly optimized to begin with. And it's not just a single network (if you will, any abstract syntax tree is a network, but that's not the point), but a whole architecture of networks: modules from different lines of research pieced together, each probably with different settings. I would be surprised if that works completely unsupervised; after all, it took months to get from beating Go to beating chess. They can run it without training the weights, but likely because the parameters and layouts are optimized already, and, to the point of the OP, because some optimization is automatic. I guess what I'm trying to say is, if they extracted features from their own thought process (i.e. domain knowledge) and mirrored that in code, then we are back at expert systems.

PS: Instead of letting processors run small networks, take advantage of the huge neural network experts have in their heads and guide the artificial neural network in the right direction. Mostly, information processing follows insight from other fields and doesn't deliver explanations; the explanations have to be there already. It would be particularly interesting to hear how the chess play of the developers involved has evolved since, and how much they actually understand the model.
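
For anyone who hasn't seen it, a bare-bones UCT-style Monte Carlo tree search looks roughly like the sketch below. This is plain MCTS with random rollouts, not AlphaZero's actual search (which replaces the rollout with a value network and biases selection with policy-network priors), and the state interface (legal_moves, play, is_terminal, result) is a hypothetical stand-in:

    # Minimal UCT-style Monte Carlo tree search, for illustration only.
    # AlphaZero's real search replaces the random rollout with a value network
    # and biases selection with policy priors; the `state` interface used here
    # (legal_moves, play, is_terminal, result) is a hypothetical stand-in.
    import math
    import random

    class Node:
        def __init__(self, state, move=None, parent=None):
            self.state = state
            self.move = move              # move that led to this node
            self.parent = parent
            self.children = []
            self.untried = list(state.legal_moves())
            self.visits = 0
            self.value = 0.0              # accumulated reward

        def uct_child(self, c=1.4):
            # Pick the child maximizing the UCB1 score.
            return max(self.children,
                       key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(self.visits) / ch.visits))

    def mcts(root_state, iterations=10000):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # 1. Selection: descend while fully expanded and non-terminal.
            while not node.untried and node.children:
                node = node.uct_child()
            # 2. Expansion: try one unexplored move.
            if node.untried:
                move = node.untried.pop()
                child = Node(node.state.play(move), move=move, parent=node)
                node.children.append(child)
                node = child
            # 3. Simulation: random playout to the end of the game.
            state = node.state
            while not state.is_terminal():
                state = state.play(random.choice(state.legal_moves()))
            reward = state.result()
            # 4. Backpropagation: update statistics along the path to the root.
            while node is not None:
                node.visits += 1
                node.value += reward
                reward = -reward          # flip perspective for the other player
                node = node.parent
        # Only the most promising lines accumulate visits, hence the sparse search.
        return max(root.children, key=lambda ch: ch.visits).move

The tree stays sparse because visits concentrate on promising branches instead of brute-forcing the whole game tree.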


I'm curious how you can tell that my comment is not accurate when you yourself admit that you have no idea about deep learning.

Note that I'm not saying that Google is doing something stupid or leaving potential gains on the table. What I'm saying is that their methods make sense when you are able to perform enough experiments to actually make data-driven decisions. There is just no way to emulate that when you don't even have the budget to try more than one value for some hyperparameters.

And since you mentioned chess: The paper https://arxiv.org/pdf/1712.01815.pdf doesn't go into detail about hyperparameter tuning, but does say that they used Bayesian optimization. Although that's better than brute force, AFAIK its sample complexity is still exponential in the number of parameters.
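
For a rough sense of what Bayesian optimization looks like in code, here is a minimal sketch with scikit-optimize; the objective is a cheap synthetic stand-in for a full (expensive) training run, not anything taken from the paper:

    # Illustration only: Bayesian optimization over two hyperparameters using
    # scikit-optimize. The objective is a cheap synthetic stand-in for an
    # expensive training run that would return a validation loss.
    import math
    from skopt import gp_minimize
    from skopt.space import Integer, Real

    space = [
        Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
        Integer(2, 12, name="num_layers"),
    ]

    def objective(params):
        learning_rate, num_layers = params
        # Pretend the "best" settings are lr = 1e-3 and 6 layers.
        return (math.log10(learning_rate) + 3) ** 2 + 0.1 * (num_layers - 6) ** 2

    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print(result.x, result.fun)  # best hyperparameters found and their "loss"

The catch is the budget: in reality each call to the objective is a full training run, and with only a handful of runs available even Bayesian optimization can't see much of the space.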


Your comment reminded me of myself, so maybe I read a bit too much into it. Even given Google's resources, I wouldn't be able to "solve" chess any time soon. And it's a fair guess that this applies to most people, maybe a slightly smaller share here, so I took the opportunity to provoke informed answers correcting my assumptions. I did then search for papers, so your link is appreciated, but it's all lost on me.

> they used Bayesian optimization. Although that's better than brute force, AFAIK its sample complexity is still exponential in the number of parameters.

I guess the trick is to cull the search tree by making the right moves, forcing the opponent's hand?


I think you are confused about the thing being optimized.

Hyperparameters are things like the number of layers in a model, which activation functions to use, the learning rate, the strength of momentum and so on. They control the structure of the model and the training process.

This is in contrast to "ordinary" parameters which describe e.g. how strongly neuron #23 in layer #2 is activated in response to the activation of neuron #57 in layer #1. The important difference between those parameters and hyperparameters is that the influence of the latter on the final model quality is hard to determine, since you need to run the complete training process before you know it.

To specifically address your chess example, there are actually three different optimization problems involved. The first is the choice of move to make in a given chess game to win in the end. That's what the neural network is supposed to solve.

But then you have a second problem, which is to choose the right parameters for the neural network to be good at its task. To find these parameters, most neural network models are trained with some variation of gradient descent.

And then you have the third problem of choosing the correct hyperparameters for gradient descent to work well. Some choices will just make the training process take a little longer, and others will cause it to fail completely, e.g. by getting "stuck" with bad parameters. The best ways we know to choose hyperparameters are still a combination of rules of thumb and systematic exploration of possibilities.
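
A toy example of the last two levels, using only numpy: the inner gradient-descent loop fits the ordinary parameters, and the outer loop crudely searches over one hyperparameter, the learning rate (the data and the linear model are made up for illustration):

    # Toy illustration of the parameter / hyperparameter split.
    # Inner loop: gradient descent fits the ordinary parameters (w, b).
    # Outer loop: a crude search over one hyperparameter, the learning rate.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)  # synthetic data

    def train(learning_rate, steps=200):
        w, b = np.zeros(3), 0.0
        for _ in range(steps):
            err = X @ w + b - y
            w -= learning_rate * (X.T @ err) / len(y)   # gradient step on the parameters
            b -= learning_rate * err.mean()
        return float(np.mean((X @ w + b - y) ** 2))     # final training loss

    # Hyperparameter choice: too small barely learns, too large diverges.
    for lr in [1e-4, 1e-2, 1e-1, 1.0, 3.0]:
        print(lr, train(lr))

Here a bad learning rate shows up after a cheap run; in a real deep network, finding out that a hyperparameter choice gets "stuck" can cost days of training, which is exactly what makes the choice hard.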


Google did do this. It took ungodly amounts of computing power and only did slightly better than random search. They didn't even compare to old-fashioned hill climbing or Bayesian optimization.


I've considered gradient descent for optimizing parameters on toy problems at university a few times. Never actually did it, though; it's a lot of hassle for the benefit of less manual interaction, at the cost of no longer building some intuition.


In the Czech Republic, it's pay slips/bank statements, or a work contract for a mortgage (you cannot fire someone without a reason). There is a central government database of people who didn't pay a debt on time.

In Germany, also pay slips and work contracts. Additionally, there is a semi-private government-mandated company (I never understood why Germans don't complain about this) that maintains a database of existing debtors. You basically start with a perfect score and it decreases only when you don't repay on time.


There is a gigantic difference in who can get mortgages, and at what leverage, in Europe vs. the United States.

Completely private and manual underwriting exists and has always existed in the United States. The reason it is not popular is that the vast majority of Americans would not qualify under any private underwriting guidelines for most of the loans Americans get (and successfully pay for).


Is this around Bernauer Str or also somewhere else? I've only ever seen it there.


Yeh exactly


> though sometimes there was a thin strip of East Berlin strictly speaking

There were also secret doors in the wall, and on several occasions people from the western part who ended up on this thin strip adjacent to the wall were "kidnapped" by the Stasi to the east and later usually traded for eastern agents captured in the west.


[flagged]


It did seem pretty outlandish to me, but it seems to have happened occasionally: https://www.welt.de/kultur/history/article13519471/Als-ein-w...

A less detailed article in English about the same event: https://www.theguardian.com/artanddesign/2014/nov/03/east-ge...


Okay, your google-fu seems to be stronger than mine. At least all the elements mentioned in the original comment are there, even the Stasi-IM.


I don't really remember where I heard about this. It definitely wasn't an article online. It might have been the museum at Bernauer Str.


Came to say the same. I’ve always assumed the target audience of these machines is just video editing?

What are the requirements for running, say, TensorFlow? Isn't it just the GPU there?


I worked in bioinformatics at a Max Planck Institute during my studies, and made good use of a Mac Pro. Some of the tools we used were interactive, such as 3D modelling. Many other tasks, such as DNA sequence searches (BLAST), just happened to involve datasets that were too large for most notebooks, but manageable with the 32 GB of RAM the Mac Pro had.

We did use a compute cluster, but being able to work locally drastically reduced the turnaround time during development in my experience.


You can't run tensorflow on it because it has an AMD GPU.


Tensorflow has been supported on AMD GPUs since at least November.

https://www.anandtech.com/show/12032/amd-announces-wider-epy...


No, AMD has claimed that tensorflow works for them since 2016 IIRC, but it's not exactly correct.

The main tensorflow branch does not support AMD GPUs at all.

There's a bunch of experimental code by AMD that possibly works in certain conditions, but is neither properly supported nor widely used. For example, there's hiptensorflow, which seems to be a TF port stuck on 1.0.1 (TF is now up to 1.5, with many major releases and key changes) and not actively developed (the last commit seems to be at the end of 2017); core parts of it (ROCm) don't work on OS X (TensorFlow as such runs just fine on OS X, just CPU-only, which is a major performance limitation compared to any machine with a good nvidia GPU), and thus are not that relevant for discussing Apple systems. There are also some OpenCL port efforts, none of which are stable, up to date, and actually usable to get the expected performance improvements. All these projects are interesting, promising, and difficult, and their developers deserve all the best, but calling them "Tensorflow has been supported" is an extreme exaggeration, PR vaporware.

Proper support is not there yet. I believe that there are likely ways to jury-rig some code together and run it on AMD hardware, but that's not comparable to the maturity level of these tools on nvidia (essentially, run the install, and it works). AMD still has to do their homework and fulfil their promises. They've spent far too long saying that the software support is coming soon, and now their claims have to be taken with a grain of salt. They can hope that the community will take over effective development a year or two after they do that and people in the ML scene start widely using their hardware, but not earlier. Currently, even if you have a nice AMD GPU, it's cheaper to just buy another high-end GPU from nvidia rather than spend a week of engineer-time to maybe get their shoddy software to do what you need.
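
As a quick sanity check of what a given TensorFlow build actually sees (TF 1.x API, current as of this thread), something like the following works; on a stock macOS install with an AMD GPU it will report no GPU device:

    # Sanity check of what the installed TensorFlow build can use (TF 1.x API).
    # On a stock macOS install with an AMD GPU this prints False and lists
    # only the CPU device.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.test.is_gpu_available())
    print([d.name for d in device_lib.list_local_devices()])
    # e.g. ['/device:CPU:0'], with no '/device:GPU:0' entry absent CUDA or a working ROCm stack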


Sorry, this is not just a box that you tick off. The CUDA code paths in Tensorflow have been optimized and stabilized for years. Why would anyone drop 5000-10000 Euro on a Tesla-class GPU and get a subpar experience when you can just buy NVIDIA?

AMD has a lot more work to do in this area before people will seriously consider them in GPU computing.


I guess it's not really that surprising, but recording a 4K display eats about one and a half cores on a Core i5, 2.7 GHz.


Is this with the real Mac RetroClip? That sounds really high, like way higher than expected. Is it an older machine without a hardware video encoder?


MacBook Pro, 13-inch, 2015, Intel Graphics 6100. So I guess not. The 4K display is connected via DisplayPort, and runs downscaled at 2560x1440.


OK, I would expect that setup to perform really well (like just a couple % CPU use in Activity Monitor). If you have a chance, can you send a sample[1] to support at realartists.com? I'd love to see what's taking all that time.

[1] https://developer.apple.com/legacy/library/documentation/Dar...

