Hacker News

I work on this team! (Specifically: applied deep learning research, chip design).

It's a shame to see so many people dismissing this work as marketing. I see lots of clever people working hard on genuinely novel and interesting stuff, and I do think that ML has real potential to customize a design much more "deeply" than traditional automation tools.




This is directed at AI marketing in general: "AI" has been used to market so much nonsense it's probably becoming a problem communicating actual interesting uses of AI. I very much get a dot com vibe off it, like nobody on the team knows how it works but we're sure we're gonna be rich somehow! In my head, I've begun substituting AI with "wizards" when I read it.

It's very much the sort of problem crypto is having: with so many grifters around, actually interesting uses of the technology are very hard to identify and take seriously.


I guess so, but the fact of the matter is that ML/AI is actually, right now, doing useful things that would have been impossible 10 years ago. I don't think I could say the same about crypto (as distinct from cryptography).


> "AI" has been used to market so much nonsense it's probably becoming a problem communicating actual interesting uses of AI.

On the other hand: if the people who do serious work in this area don't call out this nonsense, they must accept that their (serious) work becomes devalued.

> It's very much the sort of problem crypto is having: with so many grifters around, actually interesting uses of the technology are very hard to identify and take seriously.

Here, the same holds.


It's a lost cause; you have to pick a new word.

"Diet" was used for women's food products to the point that men didn't want to buy anything with "diet" on it, so Coca-Cola created Coke Zero just for men instead of trying to make them drink "diet" Coke. They knew it was a lost battle.


I think it's funny how "the old AI" had combinatorial optimization as a major theme, for instance

https://en.wikipedia.org/wiki/Travelling_salesman_problem

which is closely related to the central operation of logic, the canonical NP problem

https://en.wikipedia.org/wiki/Boolean_satisfiability_problem

as well as the playing of games like Chess, Poker, etc.

Modern neural networks also have optimization as a theme even when the output is a classification or something else that doesn't look like optimization: the network itself is trained to minimize an error function. People used these kinds of algorithms back in the 1980s to lay out chips

https://en.wikipedia.org/wiki/A*_search_algorithm

and it's only natural that new techniques of optimization (both direct and through heuristics like the neural network used in AlphaGo) are used today for chips.
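To make the "training is just optimization" point concrete, here's a toy sketch (all numbers and names are made up for illustration): fitting a one-parameter model by gradient descent on a squared-error function. The same loop, scaled up enormously, is what training a neural network amounts to.

```python
# Toy illustration of training as optimization: fit y = w*x to data by
# gradient descent on a squared-error function. Hypothetical data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # single trainable parameter
lr = 0.01  # learning rate
for _ in range(500):
    # gradient of sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad

print(round(w, 2))  # -> 2.04, the least-squares slope for this data
```

A network just has millions of parameters instead of one, and the gradient is computed by backpropagation, but the optimization framing is identical.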


Yes, the way I see it, one of the major benefits of deep learning is that it lets you define functions (in the R^n -> R^m sense) that would be basically impossible to define with traditional programming techniques. I think this comes up a lot in subroutines of combinatorial optimization, like heuristics for guiding search on subsets of NP-complete problems. The fact that you can automatically evaluate the heuristic and train by RL is also very convenient.


It's the same with a lot of the machine learning work posted here: the 2nd or 3rd comment is usually about how it could be achieved with normal algorithms instead. But slowly, as more people apply machine learning to different problems, it is solving many of them.


To be fair, Nvidia does a lot of “selling” when they’re basically making money from crypto and their CUDA monopoly.


When I briefly used Cadence's stuff I always thought about how fixing DRC errors could be crowdsourced as an "idle game" because it's so puzzle-like. The other thing was how it's even slower than Vivado...

Using RL to automate DRC fixes, and modeling standard cells as graph/flow problems are things I'd love to learn more about. What papers would you recommend reading to get started (for a grad student already familiar with machine learning basics)?


A quick tangent if you have time and can discuss it: some really interesting, effective and odd antenna designs came from AI:

https://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20(Hornby)....

Have there been any odd, surprising, or wildly efficient chip designs that have come out of the AI tools?


I saw the in-depth presentation at DAC. Until your company is willing to actually release your work, it's marketing.


Any word on how the accuracy/quality of the final results compares to traditional flows? Are process variations handled differently (with regard to training or modelling) compared to IR? I assume the traditional vendors (CDNS/SNPS/MENT) all have (or are working on) AI-driven tools as well. How do they compare?


In general, a function approximation solution like deep learning does worse on cases where exhaustively finding the exact optimum is possible (small combinatorial problems), but can be applied to much larger instances than the exact algorithms can.
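A tiny illustration of that tradeoff on a toy TSP instance (made-up coordinates): brute-force enumeration is guaranteed to find the optimum but costs O(n!), while a greedy heuristic runs in polynomial time and is merely approximate, so it's the only option as instances grow.

```python
# Exact enumeration vs. greedy heuristic on a small TSP instance.
import itertools, math

pts = [(0, 0), (1, 0.9), (2, 0), (3, 2), (0, 2)]  # hypothetical cities

def d(a, b):
    return math.dist(a, b)

def tour_len(order):
    return sum(d(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: enumerate every tour with city 0 fixed first -- O(n!),
# only feasible because the instance is tiny.
best = min(itertools.permutations(range(1, len(pts))),
           key=lambda rest: tour_len((0,) + rest))
exact = tour_len((0,) + best)

# Approximate: nearest-neighbor greedy from city 0 -- fast, no guarantee.
tour, left = [0], set(range(1, len(pts)))
while left:
    nxt = min(left, key=lambda c: d(pts[tour[-1]], pts[c]))
    tour.append(nxt)
    left.remove(nxt)
greedy = tour_len(tour)

print(exact <= greedy)  # -> True: exact search is never worse
```

At five cities both finish instantly; at fifty, only the heuristic does, which is where learned heuristics earn their keep.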


Ha, this does sound awesome. Are you guys hiring?



