
Same here. What's this about? Anyone who can see it willing to share some more info?

It just generates the chess set using GenAI.

Oh yeah that's indeed too dangerous for my country, thanks Google!

Maybe "Thanks, your country"?

No, AI chess is not banned here. But I suppose they - even in this app - collect every bit of data possible and use it against the user; then yes, that is banned, and that is not my country's fault.

These gate and metal pitch numbers don’t tell the whole story. In the end it’s logic gate density that counts.

And while decreasing the gate and metal pitch, the logic cells themselves have also shrunk (typically expressed by measuring the height of a cell in number of metal tracks), from 9 tracks down to 6 tracks.

Changing the transistor from planar to fins, and now hopefully to ribbons, with eventually stacked PMOS and NMOS, is a big enabler.

That said, we’re still not hitting the ideal scaling numbers. We’re just doing somewhat better than what’s suggested by poly and metal pitch only.
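
To make that concrete, here is a rough back-of-the-envelope sketch; the pitch and track numbers below are purely illustrative, not any particular foundry's:

    // Standard-cell area is roughly cell height * cell width, where
    // height = track count * metal pitch (M2P) and width scales with
    // contacted poly pitch (CPP). Illustrative numbers only.
    #include <cstdio>

    int main() {
        double old_cpp = 54.0, old_m2p = 36.0, old_tracks = 9.0;  // "old" node, 9-track cell
        double new_cpp = 45.0, new_m2p = 30.0, new_tracks = 6.0;  // "new" node, 6-track cell

        double old_area = old_cpp * (old_tracks * old_m2p);
        double new_area = new_cpp * (new_tracks * new_m2p);

        // Pitch scaling alone would give (54*36)/(45*30) ~= 1.44x,
        // but with the 9T -> 6T cell-height reduction it comes out at ~2.16x.
        std::printf("density gain: %.2fx\n", old_area / new_area);
        return 0;
    }

That is why a node can advertise a roughly 2x density jump even when the raw pitches only improve by 15-20%.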


This looks like a treasure trove on what it takes, in terms of algorithms, to enable tools like Cadence Innovus or Synopsys ICC. It's not a user guide on how to use these tools, but rather a peek behind the curtain.

I've worked with Andrew, one of the authors, on occasion in the past, and he and his team of students are among the best academic teams in the world on this topic.

I do think a lot of the secret sauce lives as trade secrets with Cadence, Synopsys, Mentor… They see all the real problems in designs from all their customers in bleeding-edge nodes like 3nm and beyond.


That book is great. This one is also quite good.

https://books.google.com/books?id=EkPMBQAAQBAJ&printsec=fron...

Handbook of Algorithms for Physical Design Automation, edited by Charles J. Alpert, Dinesh P. Mehta, and Sachin S. Sapatnekar

The information is available if one looks for it. It's a tough subject though.


Thank you for the write-up! Very enlightening to see how things went for you. I was on the fence recently about whether to go for contract work, but ended up just switching companies in the end, in part because I didn't know what to expect. Articles like these would have helped!


If you want to explore what consulting looks like without the risk of going it alone, consider joining a company that does consulting. It's a good opportunity to learn about the sales cycle as well as get a taste for what sort of challenges consulting provides, while still being an FTE with benefits.


This is useful information to learn, but I can tell you from having done all three that consulting for a shop is as different from freelancing as it is from an in-house job.


I guess it depends on how close you are to the sales process. I have had a chance to get a lot of experience working on proposals from a tech point of view and am involved in the pitch and negotiation. There is definitely carryover from those activities. But if you are consulting as a heads-down IC and leave other aspects of projects and bids to the "business people", I can see where you may not learn as much that will be helpful as a freelancer.


I'm happy you liked it!


This is a somewhat tangential question to the new release, but there might be folks here that can answer this question.

Having used SWIG to create Python bindings for C++ code over 10 years ago, what's the recommended way to do this in 2023? There's still SWIG, there's Cython, pybind11 and a few others. I do know SWIG and how complicated it can get, with footguns abounding when things grow more complex.

Is Cython the way to go? How does it hold up against the alternatives? Google search gives many articles on the topic, but most are the typical SEO-optimized, low-value kind, and those that do show a bit of depth focus on the basic mechanics, not really on how things hold up for larger projects…


I haven't used Cython too much; it does look really interesting, but the translation layer worries me a little bit for more complex modules. However, I've been using pybind11 extensively and it's a delight. Well designed, documented, predictable, removes a massive amount of boilerplate, integrates perfectly well with C++ idioms (e.g., smart pointers), and doesn't completely lock you in, as you can still call the C API in a regular way.
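
To give a flavour of it, a minimal pybind11 module looks roughly like this (module, class, and function names are just placeholders):

    // example.cpp - build as a Python extension, e.g.:
    //   c++ -O2 -shared -std=c++17 -fPIC $(python3 -m pybind11 --includes)
    //       example.cpp -o example$(python3-config --extension-suffix)
    #include <pybind11/pybind11.h>
    #include <memory>

    namespace py = pybind11;

    struct Counter {
        int value = 0;
        void increment() { ++value; }
    };

    PYBIND11_MODULE(example, m) {
        m.doc() = "minimal pybind11 example";
        m.def("add", [](int a, int b) { return a + b; }, "Add two integers");

        // shared_ptr works as a holder type out of the box
        py::class_<Counter, std::shared_ptr<Counter>>(m, "Counter")
            .def(py::init<>())
            .def("increment", &Counter::increment)
            .def_readonly("value", &Counter::value);
    }

From Python it then reads like a normal module: import example; example.add(1, 2); c = example.Counter(); c.increment().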


Thank you! - I’ll give pybind11 a go.


Cython is the most general tool. It can be used for anything from making bindings from C/C++ to Python, or Python to C/C++, to writing compiled "Python-like" code in an intermediate layer that can be used for managing your wrappers or just writing performant code.

If you just want the ability to provide a Python interface to a C/C++ library, pybind11 will get you there in fewer LoC than Cython. Nanobind is an even lighter-weight option (see the sketch below).

I’ve heard Swig is a pain to use.
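
For a sense of how lightweight nanobind is, a minimal module boils down to something like this (module and function names are placeholders; nanobind requires C++17):

    #include <nanobind/nanobind.h>

    namespace nb = nanobind;

    NB_MODULE(my_ext, m) {
        // Exposes my_ext.add(a, b) to Python
        m.def("add", [](int a, int b) { return a + b; });
    }

The API deliberately mirrors pybind11, so moving between the two is mostly a matter of headers and build setup.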


Thank you! SWIG indeed can be a pain, but having used it before I have become somewhat blind to it. But e.g. smart pointers are not easy to deal with well, I've found out recently… I'll have a look at pybind11. I've worked on Cython codebases too; Cython indeed allows you to really nicely compile Python code and interact with C code. It does get weird when using e.g. PyQt and native Qt…


If all you need/want is to call C++ code from Python, then pybind11 is the way to go. Cython really comes into its own when you have some existing Python code you want to 'port' to a C extension.


Thank you! For now I am just binding C++ to Python, but I expect/fear the lines might start blurring, so Cython might come in handy then.


It’s a little easier to write idiomatic python bindings for a C/C++ library in Cython IMO, because you’re writing the bindings in a language that’s almost python.


The problem with the SWIG bindings I've used is that they don't have any type hints. They also don't offer context managers to handle resources, so they're a pain to use safely from Python.

From the user POV, the best bindings I’ve seen were wrappers with a Python API that calls C++ using Cython.


If you just need a nice print: fmtlib is a really nice C++23-style print implementation (think std::print/std::format) without needing C++23 compiler support. Highly recommend it. It's simple. It's fast.
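
A minimal taste, assuming the library is installed and linked (e.g. -lfmt, or header-only with FMT_HEADER_ONLY):

    #include <fmt/core.h>
    #include <string>

    int main() {
        // Same formatting mini-language as std::format
        std::string s = fmt::format("The answer is {}.", 42);
        fmt::print("{}\n", s);
        fmt::print("pi ~ {:.3f}\n", 3.14159);
        return 0;
    }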


Not really knowing Vercel, I thought, based on the title, that it might have been a new GPU competitor (accelerator card) but it’s a startup accelerator.


While I have no need for its online functionality and the SaaS part of Plotly, I really do like Plotly Python + cufflinks [1]. It lets you make interactive plots in HTML/JS format, which means you can save the notebook as HTML: people won't be able to rerun the code, but they can still zoom in on graphs, hover to see annotations, etc. It's a really nice way to share the outcome of your work in a more accessible way.

[1] https://github.com/santosjorge/cufflinks


Cufflinks seems to be stale; maybe it is not needed anymore to bind Plotly and pandas? I don't think these options existed in 2021, when cufflinks was last updated:

- Plotly can be used directly as pandas backend: https://plotly.com/python/pandas-backend/

- The plotly.express module makes it easy to create interactive plots in html/js format from pandas dataframes: https://plotly.com/python/plotly-express/#gallery


If you're interested in an easier way to create reports using Python and Plotly/Pandas, you should check out our open-source library, Datapane: https://github.com/datapane/datapane - you can create a standalone, redistributable HTML file in a few lines of Python.


I was wondering what the goal of the project is. The README is not very clear on it, but the implementation document [1] does state design goals:

- Small memory usage within a fixed-sized memory region — no mallocs

- Practical for small scripts (extension scripts, config files)

- Concise source — less than 1000 loc

- Portable ANSI C (Windows, Linux, DOS — 32 and 64bit)

- Simple and easy to understand source

- Simple and easy to use C API

[1] https://github.com/rxi/fe/blob/master/doc/impl.md


> - Portable ANSI C (Windows, Linux, DOS — 32 and 64bit)

I skimmed through the source, and aside from reading a file from STDIN to a `static char buf[64000];`, nothing in this seems to use the POSIX API. With that buffer trimmed to an appropriate length, it appears it could run on a microcontroller, which is always a useful thing to have.


The author developed some games. Maybe the goal of fe is to write game scripts.


A lot of companies are indeed trying to build AI accelerator cards, but I would not necessarily call them ASICs in the narrow sense of the word; they are by necessity always quite programmable and flexible, since NN workload characteristics change much, much faster than you can design and manufacture chips.

I would say they are more like GPUs or DSPs: programmable, but optimised for a specific application domain, ML/AI workloads in this case. Sometimes people call these ASIPs: application-specific instruction-set processors. While maybe not a very commonly used term, it is technically more correct.

