Ignoring flexibility and cost/performance, this may be a sign that rapid chip fab turnaround times are possible. These were made by Pragmatic Semiconductor [1], who claim they can make chips within 48 hours and deliver within 4 weeks (likely due to their use of unconventional materials). Traditional silicon fabs, including trailing-edge foundries and TSMC, take 2-9 months. I do wish they'd emphasized this instead of flexibility.

[1] https://www.pragmaticsemi.com/


Yeah, but traditional silicon fabs aren't making 12k-gate chips.


What's the turnaround time on 12k-gate chips from a traditional fab?


Just googling really quick: the Lattice Semiconductor LFE5U-12 is a 12k-LUT FPGA (and a LUT is way more flexible than a gate).

So realistically, if you need fully custom digital logic, you'd buy an LFE5U-12 instead and program that.

So that's a $16 FPGA from widely available distributors (like Digikey), who can likely offer 1- or 2-day shipping.

-------------

Custom chip design for a flex-circuit is interesting, but only if you have substantial analog parts that cannot be easily implemented by an FPGA.


The power profile of an FPGA is very different from (and worse than) dedicated silicon, in both the quiescent background current and the energy cost of switching a gate. There are a lot of situations where that isn't acceptable.


I suspect the power profile of an FPGA is better than this flex-silicon...


True, that's definitely possible. I was giving the habitual answer (the one I occasionally need to give a client at work) for why you need to spend $2M having an ASIC made when "it's just digital logic and we got the prototype going in a month".

They usually care about at least one of size, power, or cost.


Has anyone used Perforce Helix for this?

If you're already using it for large file version control for, e.g., gamedev, and don't mind the cost, how well does it work to store all other company documents? I'd assume it has better scalability and permissions management than Nextcloud (not to mention the version control on par with git).


It works quite well. I used it at my previous gig. All documentation and design libraries of the company (100+ people) were in Perforce. Everything was on AWS.

As a plus, it's a joy to browse the design library and do code reviews with Swarm.


Custom state-of-the-art silicon is ridiculously expensive.

For a minimum run of 100 wafers, which is roughly 10k chips, Groq may have paid ~$100M, i.e., ~$10k/chip, purely in amortized design costs.

Chip design (software + engineer time) and fabrication setup (lithography masks) grow exponentially [1][2] with smaller nodes, e.g., maybe $100M for Groq's current 14nm chips to ~$500M for their planned 4nm tapeout. Once you reach mass production (>>1000 wafers, each holding ~150 large chips), wafers are $10k each. On top of this, it takes ~1 year to design and then have prototypes made. (The same issues still exist on older, slower nodes, albeit not as severe.)
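To make that amortization concrete, here's the same back-of-envelope in a few lines of Python (all figures are the rough guesses above, not Groq's actual numbers):

    design_cost = 100e6         # assumed design + mask setup cost, $
    wafers = 100                # minimum prototype run
    good_chips_per_wafer = 100  # ~10k chips from 100 wafers, per the estimate above
    wafer_cost = 10e3           # $ per wafer once in mass production

    chips = wafers * good_chips_per_wafer
    print(design_cost / chips)                # ~$10,000/chip from design costs alone
    print(wafer_cost / good_chips_per_wafer)  # ~$100/chip of actual silicon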

This could be reduced somewhat if chip design software were cheaper and margins were lower, but maybe 20% of this cost is due to fundamental manufacturing difficulty.

(disclosure: I don't work with recent tech nodes myself; this is my best guess)

[1] https://www.semianalysis.com/p/the-dark-side-of-the-semicond... [2] https://www.extremetech.com/computing/272096-3nm-process-nod...


> Custom state-of-the-art silicon is ridiculously expensive.

Think about the amount of money being dumped into "AI" at this point. If you've got the technology and people to make stuff faster/better/cheaper, finding investors to dump money into your chip making business is probably not as hard as it was 2 years ago.

Groq is making this change for reasons other than the expense of taping out chips.


The report I read said that the latest TSMC node is $17K per wafer. How much less it is for 14nm, I don't know.


The masks are the expensive part, not the wafers.


They are both fabulously expensive.


i don’t support hardware development directly, but i’m a software infrastructure engineer working adjacent to the teams that do so.

can’t comment on specifics, but imo our hardware team punches above its weight class in terms of # of people and time spent in design.


Pragmatic is very impressive because they're a startup that (a) is building their own chip fab and (b) said fab is much faster and cheaper than existing fabs.

They claim [1] to be able to make chips from a new design in only ~4 weeks, compared to the usual 3-6 months required by anyone else, which is huge for R&D. They manage this by using an unusual, relatively low-performance process (which is also what allows them to use plastic substrates), but it's arguably a worthwhile tradeoff: in return for slower, larger transistors, you get significantly lower equipment cost and lead times. That the chips are flexible is almost an afterthought, I think, albeit a nifty one.

They've also announced efforts [2] toward open-sourcing their PDK, joining the growing movement toward open source chip design.

[1] https://www.pragmaticsemi.com/foundry [2] https://www.pragmaticsemi.com/newsroom/blogs/democratising-i...


Plasticity is interesting because it is maybe the only way to run the Parasolid geometry kernel natively on Linux right now.

Parasolid, the library used to perform the geometric operations (the most difficult and important part of a CAD program), also powers the likes of SolidWorks (the industry standard), NX, and Onshape, and is arguably the best in the world. Its licensing cost is presumably a large part of the Plasticity price.


Why is it difficult (geometric operations)? Do you have any suggestions on how to learn more about it?


There are two aspects of CAD that are very technically complex: parametric modelling and constraint solving.

Parametric modelling is similar to 2D vector graphics formats in that instead of defining where vertices are placed in a coordinate space, it builds the model based on an instruction set that includes primitive shapes like circles but can also include complex curves defined using splines (NURBS)[0].

Constraint solving is a way of mathematically deriving the possible shapes an object can take based on the geometric constraints applied to it. For example, a 2D equilateral triangle could be defined by setting the length of one edge and then constraining all edges to be of equal length. The coordinates of the vertices are derived from these constraints.
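As a toy illustration of that triangle example (this is just a numeric sketch, not how a real CAD kernel solves constraints; the vertex pinning and the use of scipy are my own choices), you can pin one edge and let a solver find the third vertex from the equal-length constraints:

    import numpy as np
    from scipy.optimize import fsolve

    A = np.array([0.0, 0.0])  # pin vertex A
    B = np.array([1.0, 0.0])  # pin vertex B, fixing edge AB at length 1

    def constraints(p):
        C = np.asarray(p)
        return [np.linalg.norm(C - A) - 1.0,  # |AC| = |AB|
                np.linalg.norm(C - B) - 1.0]  # |BC| = |AB|

    C = fsolve(constraints, x0=[0.5, 0.5])  # the initial guess picks the upper solution
    print(C)                                # ~[0.5, 0.866]: an equilateral triangle

A real solver also has to handle under- and over-constrained sketches gracefully, which is where much of the difficulty lives.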

SolveSpace [1] is an open source parametric modeler with a constraint-based solver that you can explore if this is something you're interested in.

[0] https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline [1] https://solvespace.com/index.pl


SolveSpace is really a joy to use. Simple, fast, capable.

It's been a while since I used it, though. I use CAD pretty infrequently these days and generally just let the mechanical engineers do it (or maybe do it myself in FreeCAD).


I think it is like writing an OS. It's not hard[1], but there is little value in a novel OS that isn't Linux x86_64 or Win32/x86 ABI compatible. From [2][3]:

> The kernel market currently is dominated by Parasolid and ACIS, which were introduced in the late 1980s.

> Autodesk ShapeManager is a 3D geometric modeling kernel used by Autodesk Inventor and other Autodesk products that is developed inside the company. It was originally forked from ACIS 7.0 in November 2001,

... so it's quite like OS kernels. There's WinNT, AT&T UNIX, a couple of advanced forks of UNIX, and GNU/Hurd.

1: very broadly speaking, to me it could take the rest of my life

2: https://en.wikipedia.org/wiki/Geometric_modeling_kernel

3: https://en.wikipedia.org/wiki/ShapeManager


> 1: very broadly speaking, to me it could take the rest of my life

Perhaps coincidentally the rule of thumb is that it takes about 100 developer-years to go from zero to a viable kernel.


Meh. Geometry kernels are oversold. Triangulate, CSG, and triumph. CSG is the non-trivial part, and Manifold (https://github.com/elalish/manifold) is excellent at this.

At least for consumer stuff, where you eventually just want triangles anyway, for either rendering or 3D printing.

If your industrial workflow includes CNC machines, you may benefit from actual analytic surface cuts and unions. Maybe.
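For a flavor of the triangulate-and-CSG workflow, here's a minimal sketch using Manifold's Python bindings (manifold3d); the function names are from memory and may differ a bit between versions, so treat it as illustrative rather than authoritative:

    from manifold3d import Manifold

    block = Manifold.cube((20.0, 20.0, 10.0), True)  # centered 20x20x10 block
    hole = Manifold.cylinder(12.0, 4.0)              # height 12, radius 4
    part = block - hole                              # boolean difference on triangle meshes
    mesh = part.to_mesh()                            # triangles, ready to render or print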


Excellent question! A sufficiently advanced battery can theoretically beat gasoline.

Any given energy storage technology can store a maximum amount of energy in a fixed volume or mass. Behold one of my favorite plots: [1]

From lowest to highest energy density:

- springs, which use mechanical elastic potential energy, are kinda horrible

- capacitors, which use electric permittivity, aren't great

- next come batteries and combusted fuels, both of which use chemical reactions.

- nuclear gets us another few orders of magnitude

- finally, antimatter (E=mc^2) is a ways beyond that

Both batteries and fuels rely on the chemical energy difference between reactants and products, so their theoretical energy density is the same. Well, actually, fuels are burnt to create heat which is then converted to work, and this heat-to-work conversion is fundamentally thermodynamically limited (only ~tens of percent), whereas a battery runs the same sort of reaction in a much more controlled way. A sufficiently clever battery, one that moves atoms around to react in the right places at the right times, is thus more efficient, and thus effectively more energy-dense, than fuel. However, moving atoms around like this to make a more efficient battery is much more advanced nanotech than what we currently have. But it's theoretically possible.

This is what biology does: we humans are powered by chemical storage (sugar/fat/glucose), which is used more efficiently than current batteries, and without combustion. (Lithium-ion is ~0.8 MJ/kg, glucose ~16 MJ/kg, gasoline ~46 MJ/kg.)
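A rough way to put numbers on the efficiency point (the conversion efficiencies here are my own ballpark assumptions: ~25% for a gasoline engine, ~40% for metabolism, ~90% for li-ion discharge):

    raw_mj_per_kg = {"li-ion": 0.8, "glucose": 16.0, "gasoline": 46.0}
    assumed_efficiency = {"li-ion": 0.90, "glucose": 0.40, "gasoline": 0.25}

    for store, raw in raw_mj_per_kg.items():
        print(store, round(raw * assumed_efficiency[store], 2), "MJ/kg delivered")
    # gasoline's ~60x raw advantage over li-ion shrinks to ~16x once the
    # heat-engine losses are counted; still far ahead on mass, though.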

[1] https://en.wikipedia.org/wiki/Energy_density


That is indeed a fun chart.

One thing I wanted to add is that fats (lipids) are much more energy dense than glucose, ~38 MJ/kg, though I am not sure what fraction of that the body recovers. Which makes sense: you want to maintain your long-term storage in a denser format.


Fuel (gasoline, carbohydrates, fats, etc., but not rocket fuel) uses oxygen from the atmosphere. Batteries are self-contained (like rocket fuel).

So the energy density of such fuels is by definition higher: they don't carry their own oxidizer.

Side note: energy density should apply to volume, not weight, but, well, the looser usage is too common now.


What makes one chemical able to store more energy per unit mass than another? Wouldn't the theoretical limit be pure electrons compressed into the densest unit volume possible? Say, stored in a magnetic field?


Compressing neon gas won't do much, aside from storing energy as a compressed fluid.

My point was that traditional fuels (incl. the edible ones) use more material/weight than their own, namely the atmospheric oxygen, so it is very likely they'll be more energy-dense. Batteries require a reaction that is reversible just by applying current, which is quite the climb compared to most chemical reactions.

We have not done much since li-ion's inception; using FePO4 instead of cobalt is more sensible from an economic point of view, but the energy density is even lower.


A disappointing fact of chip fabrication is that the minimum bar to entry is high and expensive.

In other fields, a hobbyist can do wood/metalworking or learn programming or build a robot kit. There's an onramp for people to start learning the skills, which makes a huge ecosystem of gradually improving talent.

But in microfabrication, even though it's the only way to make chips, screens, cameras, inkjets, and LEDs, the minimum equipment cost is millions of dollars. Even worse, it takes even professionals months to fine-tune a manufacturing process to make a new thing.

As a result, R&D is much lower than it could be, and most fabrication is limited to circumstances with a high chance of mass production payoff.


Well...

You can learn a lot by programming FPGAs.

In theory, I think you could build an SSI manufacturing device at home if you really wanted to, but just how many people build an internal combustion engine themselves to learn about it?


Hopefully the people who post on hacker news. I just bought a book on steam boiler operation and another on building and maintaining internal combustion engines; so, me!


What are you going to do with the steam boiler?


It's a research project. I am mostly self-taught, and I wanted to build a small archive of materials that would be enough to help a human survive the apocalypse: farming & animal husbandry, metallurgy, manufacture of goods, pumps/engines, electricity, and a few other things.

My project is to index the volumes in the next few years and start some of my own projects, so probably just doing some repairs on small things to start and moving on up!


> just how many people build an internal combustion engine themselves to learn about it?

At least 30 kids in Transport class when I went to high school, more if you count the kids in years before and after.

Not a full build, mind you (only some parts were cast), but we did do full strip-downs and rebuilds of six-cylinder ICEs, as well as having a cut-away display engine in the shop.

My son did better (IMHO): he got to build an entire light aircraft over three years as part of aviation at a public high school in Western Australia. He missed out on the ICE building, though.


This, but it's really the logic function versus actual device fabrication. I would love to see a home device that would let you build your own transistors and perhaps a very small number of passives.

Consider a system that had 6mm diameter "blanks" and a tool that would let you put down an epitaxial layer, lay down, expose, and then etch a resist layer, etc. At those small scales, one could hope the chemistry would be manageable.


I think the required cleanliness goes far beyond what's easily doable with a kit.


What is the smallest feasible cleanroom for such a purpose? Can we fit a turnkey chip fab in a shipping container, hermetically sealed upon manufacture, fully automated, even pre-stocked with wafers? Drop it on a concrete pad, fill up the chemical tanks, send a spec, and you have a wafer bonded to a standard package in X hours.


A modern chip has one of the longest supply chains of any manufactured good in the world. Even the biggest chip companies get the wafers made in one place, and then send them somewhere else for packaging and bonding.

It's really hard to see what benefit there might be to trying to cram all of this into a shipping container, other than to say you did it.


The cleanliness requirement is a function of feature size. For a process that has 5 µm features (roughly 250 times larger than current fabs), it's pretty manageable. That is further improved if you use the 'canister' technique, where the "wafer" is in a sealed canister that is opened when inserted into the machine and closes automatically when pulled out.

I am envisioning a handle which ends in a stainless steel holding mechanism with the feature that, when you slide it into a 'production' slot, the holding mechanism pushes back to reveal the sample's surface. Think the window on a floppy disk, only better at keeping dust out.

6mm diameter is pretty small (about 1/4 inch). Call it 25 mm^2 of area; you're looking at maybe 10K "device elements" you could lay down (I'm guessing you lose a bunch of space to 3-8 "pads").
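Rough sanity check on those numbers (the ~50 µm-per-side footprint per "device element" is my assumption; the 6mm blank and 5 µm process come from above):

    import math

    blank_diameter_um = 6000.0
    area_um2 = math.pi * (blank_diameter_um / 2) ** 2  # ~28 mm^2; call it ~25 usable
    element_side_um = 50.0                             # assumed footprint per element
    print(int(area_um2 / element_side_um ** 2))        # ~11,000 device elements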

Your "fab" is a machine that sits on a desk, it has tanks that hold 'consumables' that are used in the process and a tank that fills with waste output. Both the empty consumables tanks and the full waste tank are shipped off to be disposed of properly.

On the working side of the fab, there are ports that are the same shape as the end of your sample holder. They positively lock, so you push the sample holder into the port until it "clicks", and then the machine does an evacuation cycle, followed by that step of the process.

Building a device would consist of loading your design in, then using the various ports (sometimes more than once) for different steps: (add resist / render layer mask), (cure resist / rinse off uncured resist), (etch), (dope-p), (dope-n), (metal), (poly), (package).

The 'package' would always be the same shape. Think a metal-can transistor from the '80s, with 8 leads and a pin-1 tab. Which leads were connected would be defined by what you had programmed for your device.

You do all the steps (it would probably take anywhere from 1 to 3 days, depending on the process steps you used), and the result is the part you made. Plug it into your characterization harness and verify its function and/or signal parameters. If it fails, you toss it and do another one; if it passes, you have your bespoke device to use in your project.

I certainly don't see something like that having mass-market appeal, but I know you could sell them.


Good concept for a benchtop prototyping apparatus.

If you looked into some of the early cheap Chinese glow-pattern trinkets and musical greeting cards, the IC often looked quite "home made": a central irregular blob of epoxy covering the fabricated device in the middle of a small square ceramic substrate, with a few not-so-thin wires coming out from under the epoxy, tying it into the discrete components, if any.

I would imagine at the time the primary requirement was for the component to simply cost less and be way smaller than an alternative stuffed PCB, without any real need for it to be fabricated as one of the actual "chips" off of a silicon wafer.

Depending on what power of "microscope" you are willing to limit yourself to, it might not be as difficult to see microchips (or at least micro ICs) in your foreseeable future.


True, using "chip on board" (COB) assembly you don't even need a package. Alignment is challenging but certainly doable.


It's not the cleanroom that's the issue; it's mostly that the hazardous materials and some of the processes (gas deposition, sputtering) aren't garage-friendly.

That said, there are people on YouTube who are doing it, and it's doable to some degree in a garage environment, but the risks of dealing with hot HF acid are too much for most of us.


Keep large quantities of Calcium Carbonate on hand?


Not if you’re Sam Zeloof: https://www.wired.com/story/22-year-old-builds-chips-parents...

(But yes, to your point, most people are not)


> Even worse, it takes even professionals months to fine-tune a manufacturing process to make a new thing.

Months for simple things. Depending on technologies and requirements, the fine-tuning (increasing yield and throughput) can take years.


I think home manufacturing of chips is the next mini industrial revolution. 3D printing was the last one, I'm looking forward to seeing where we go next.


The basic process of chemical etching that the article insists is at the core of so much of our exponential progress is (still?) being taught, including practical work, in middle school, though.


Never used it, but you might find https://tinytapeout.com/ interesting.


Lower-level teaching resources definitely exist! Here are my favorites:

- The Zero to ASIC course (and Tiny Tapeout) [1] explains transistor circuits and teaches you an open-source software stack, and you get a chip physically manufactured! You could make the Nand to Tetris computer in actual silicon (if you can get enough transistors).

- To learn how things are manufactured on silicon wafers, get textbooks on microfabrication. A decent starting point is [2]. There's also a good video series [3] for a quick overview.

- To understand how a single transistor or diode works, get textbooks on "semiconductor devices". A good starting point is the free online textbook [4].

[1] https://www.zerotoasiccourse.com/ https://tinytapeout.com/

[2] "Introduction to Microelectronic Fabrication" by Jaeger

[3] https://siliconrun.com/our-films/silicon-run-1/

[4] "Modern Semiconductor Devices for Integrated Circuits" by Chenming Hu, https://www.chu.berkeley.edu/modern-semiconductor-devices-fo...


Thanks for the links, but notably there's a large gap between 'a single transistor or diode' and even the simplest real world microchip, such as the SN7400.

Everything before that stage, down to mining ore out of the ground, is understandable.

And everything after that stage is also understandable, at least to the level of an Intel 386 processor.

The gap is where I believe there are no resources online.


Cool project!

Right now y'all look focused on digital logic somewhere between ASICs and FPGAs.

Any plans for custom chiplets? Custom analog layout might be much cheaper if done MPW or Tiny Tapeout style: design a mere ~100x100um area, then bond it to standardized chiplets for control/power.


Thanks! Yes, we are focusing on big digital because that's where the big cost problem is. Also, the kind of block-based design we are proposing requires a standardized interface. The standard interface for analog is "a wire" :-). What kind of analog did you have in mind?


I believe this is the Voxa Mochii [1].

[1] https://www.mymochii.com/


Looks like it. Not quite as consumer grade as I expected; "contact us for a quote" = $$$.


You'd want a quote if you're spending that kind of money: https://www.voxa.co/solutions/mochii/faqs

> How much does one unit cost?

> Mochii starts at $48K for the imaging only unit. The spectroscopy-enabled version, which provides full featured x-ray spectroscopy and spectrum imaging, is $65K.

> It has an integrated metal coater option available for $5,000, and we offer a variety of optical cartridge exchange programs that can fit your consumables utilization and pricing needs.


Yeah I kind of thought that if it was on YouTube then it was like $1000 or something. Apparently not.

