A 60 GHz phased array for $10 (hscott.net)
252 points by blueintegral on Jan 21, 2020 | 101 comments



I love how industry came up with ever-crazier schemes to stream content from phones and laptops to TVs. There must have been three different attempts involving WiFi alone, but this 60 GHz mmWave phased-array, millions-in-basic-research abomination surely takes the cake.

Meanwhile, some Google engineer realized you could solve 90% of phone-to-TV streaming applications and 100% of the hard technical problems by just telling the TV to download and display the YouTube video itself. Genius!


Yes, genius if the content is asynchronous.

Any real-time or interactive display needs to stream at sub-frame latencies. At 60 fps that means less than ~16 ms; at VR-friendly refresh rates of ~90 fps, about 11 ms.

While their approach works beautifully for their core competency, static and non-interactive streaming content, it doesn't really work for any other application.


To be fair, the idea of a centralized computer generating content for dumb terminals to display has been around since the dawn of computing. Terminals connected to mainframes. Local X servers drew content as requested by remote X clients. The idea of having all your software and data on a very powerful computer inside your home (or pocket) is the crazy new one.

Certainly the concerns and dynamics of the situation are different now than in the 70s and 80s, but some of the thought processes are the same. People want to stream video games because they don't have $2000 up front to lay out on a gaming PC. Streaming lets them pay $5 a month instead, and unlike credit, there is no commitment. That's valuable. Greed is another reason for the cloud. There is no reason why someone should pay $10 per month for Photoshop, but since it's the only option, people do. That's free money for Adobe's shareholders.

I can see why people try to pooh-pooh this stuff; computing is built on hobbyist experimentation, and the cloud takes all that away. You can't write your own video game. You can't tweak settings, or make mods. You just get a game that someone else made.

But from a technical standpoint, streaming stuff is probably going to work. I have less than 1 ms ping to a nearby datacenter (speed-of-light distance: 8 microseconds), and so do 10 million of my neighbors, so it's probably quite profitable to have a collection of high-density GPUs and CPUs rendering games for a few peak hours a day and then training machine learning models outside those hours. The technical challenges are minimal; the idea has been around for 50 years.

The actual challenge is getting the people who own the cables in the ground between your house and that datacenter to actually switch packets quickly enough to make it all work. When you were connecting a mainframe in the basement to terminals upstairs, you made it work because it was your job. But now, one company owns all the cables and another wants to make content to send over those cables, and the incentives no longer align. Sure, Spectrum COULD update their core routers... but they could also not do that, and then your video game streaming service is dead. (Meanwhile, they dream of showing up and making their own video game streaming service. They have as much time as they want, because they own the cables!)


I think it'll become easier to write your own cloud-streamed video game in the maker/hobbyist kind of way, even if the cheap or open source Stadia-workalike backend hasn't arrived just yet. (Of course Google might open up Stadia itself at some point too)


Doesn't Stadia use it?


Stadia is riddled with latency issues. Taking numbers from the Gamers Nexus review (https://www.youtube.com/watch?v=m0gILReDQsY): in a high-framerate game it has 85 ms input lag (22 ms for local play), and in low-framerate games it has 110 ms (61 ms local). Of course these numbers depend a lot on your internet connection.


Even with mediocre routing, Steam Remote Play beats these times by being 3x faster while also moving more data (~40 ms, 25 Mbit/s).

The limitation on speed there was the ISP's insane peering scheme - Orange Poland is a dinosaur, so all cross-network traffic (from anyone not paying their scalper peering rates) got routed out of the country and back.


And now it's easy to tell my TV to play a YouTube video via my phone instead of just using the remote, but it's a magnificent pain in the ass (or outright impossible) to share my screen to my TV or stream an actual video file directly. Pretty lame outcome.


You can share a Chrome tab or your desktop to a Chromecast. Playing a video that way requires a low enough bit rate and modest decoding requirements, but it works.

https://support.google.com/chromecast/answer/3228332


I remember hearing about these around 2008; the application being pushed then was no cables to your fancy flat-screen TV: the Blu-ray player uses UWB RF, so no cables are necessary. I don't think anyone at the time saw the end of physical media and the rise of streaming on the horizon.

I also remember in 2008 hearing about how RFID would soon be ubiquitous on consumer products like UPCs and you could just load up a cart with groceries and walk out the door without scanning anything. That one may actually pan out, but it is much later than it was supposed to be.


A sports goods store near me has finally implemented that. It's magic. All you need to scan is your debit card. Especially bewildering when you're not doing self-checkout. You dump your stuff on the counter and the cashier can pretty much instantly tell you your total.


You mean Apple. AirPlay was there years before Google made a move in this area.


If we're going to talk about who did it first, DLNA (https://en.wikipedia.org/wiki/Digital_Living_Network_Allianc...) came out in 2003 and had the concept of a "renderer".

A renderer is a device that receives commands to go pull some media from somewhere and start playing it. And that can include video.

So the concept has been around a long time.


Well, DLNA and IPTV were never a mobile phone feature, but the point was simply that Google did not invent it. Thanks for agreeing with me.


IPTV happened in the late '90s according to Wikipedia; you could stream to your STB-equipped TV from the internet or from your laptop. But of course we only started seeing direct YouTube-to-TV after Google started working with smart-TV and STB companies.


The f with downvotes? Just check your sources:

AirPlay: 2010

Miracast: 2012

Google Cast/Chromecast: 2013


Phased arrays are very cool tech. Personally, I can't wait for visible-wavelength optical phased arrays to hit the mainstream (they're just now being implemented), since they'd enable tech like legitimately holographic displays and video cameras with digitally programmable optical zoom.


Microlens arrays, for all intents and purposes of consuming and recording media for human consumption, are essentially equivalent to ideal phased-array optics (while having element spacing much larger than a phased array requires). We perceive light incoherently; phase information is lost to our eyes, so microlens arrays suffice to reproduce a light field sans phase effects[1]. That includes the perception of most phenomena affected by light coherence, like oil on water or viewing laser experiments (not totally sure, but I don't think there are experiments that can't be reproduced by geometric-optical light fields without relying on coherent measurement? maybe some interference phenomena, though?).

There are practical problems with the technology though, as the light sources we can currently make have some minimum size limitations, and incoherent optical behavior starts to degrade at very small lens sizes (at or below the micron scale, I guess).

[1] You could probably create a good approximation of phased-array optics with LED-scale (~10 micron) coherent lasers as light sources, but again I don't see any application that's not scientific.


Microlens arrays seem very interesting! I haven't seen any that are very high resolution... Is that because they kill the resolution of the display they're placed upon (as well as technical problems with very small microlenses)? So, we'd need extremely high resolution displays for microlens array displays to look reasonable by modern standards?

Perhaps actual phased array optics wouldn't have that issue?

For those just entering this thread, [1] is an example of a rudimentary microlens array display.

[1]: https://www.youtube.com/watch?v=mGJe0AdszJg


Looking Glass Factory is currently producing a display that puts microlenses on an 8K screen. The video of it looks quite good. It outputs 45 different angles, so the effective resolution is roughly 640p, right about at the bottom end of HD. Not the best, but it's good enough for a lot, and I'm sure the limit will improve over time. It's only horizontal parallax, but that's generally fine for a fixed screen.
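
A quick sanity check of that "roughly 640p" figure (Python), assuming the panel's pixels are split evenly across the 45 views and each view keeps a 16:9 aspect ratio (the real lenticular layout distributes pixels differently):

    total_px = 7680 * 4320          # 8K UHD panel
    views = 45
    per_view = total_px / views
    height = (per_view * 9 / 16) ** 0.5
    width = per_view / height
    print(f"~{width:.0f} x {height:.0f} per view")   # roughly 1145 x 644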


Have any further reading I can do on this?


https://www.spar3d.com/news/lidar/mits-10-lidar-chip-will-ch...

Holographic displays would use eye trackers to show each eye a different image. Solid-state zoom is maybe a bit of a stretch, but it would involve pixels becoming sensitive to angles more inward or outward from the sensor's center.


I'm not an expert, but I believe that's not how holographic displays would work with optical phased arrays. I believe a phased array can make it seem that light is being emitted from any point above the display (within the display's supported viewing angles). There's no need to track observers, because it would be an honest reconstruction of the light emitted from a real three-dimensional object.


I believe that's correct - see http://www.phased-array.com/1996-Book-Chapter.html "Front projection Images" for details.


This chapter by Brian Wowk is a good start. It explains the basics and compares optical phased arrays against holography (that poor word, keeps getting abused!):

https://www.phased-array.com/1996-Book-Chapter.html


Not really a phased array, but https://en.wikipedia.org/wiki/Lytro


The light field conceptually belongs to geometric optics. It doesn't involve phase or the wave theory at all. It would be either an input to or output from a phased array.

Phased arrays admittedly use the wave theory in a way which is geometrically simple. Maybe "far field" is the way to describe it.


Wait, how does that work? You basically play around with beamforming and/or phasing at 480-750THz and light appears...?!


>Now the bad news: SiBeam was bought by Lattice Semiconductor, and right before I gave this talk, Lattice shut down the entire SiBeam organization and ended support and production of this part. I didn’t find out about this until months later, when I contacted the sales engineers I had been talking to about this part and they told me what happened.

This is one thing that really pisses me off. Time and time again you've got small(ish) companies doing interesting stuff, succeeding, and then they step on a landmine. They do something that gets them in the crosshairs of a big company, and suddenly BOOM: the big company buys the small company for ridiculous money and then inexplicably shuts down 90% of what the small company was doing. The sale happens at a nice premium, and yet the second the sale is closed, 90% of the things that made the company valuable are jettisoned. How can it be that these companies can afford to buy companies at a premium and throw away massive parts of the value of the company? Yet this obvious value destruction seems to be standard operating procedure for large companies.


> How can it be that these companies can afford to buy companies at a premium, throw away massive parts of the value of the company

It's almost like the lack of robust anti-trust prosecution by world governments has so enriched large, rent-seeking companies that they can literally afford to burn money and still come out ahead...


On the other hand, preventing acquisitions reduces available exits and might discourage future innovation (which in turn might promote more trusts).


There needs to be a larger gradient of funding options than "Waste cash until unicorn" or "rent-seek until next bailout", and "dominate small-to-medium market niche" or "sponsor and penetrate next manufacturing commodity".

We've seen so much wastage from the prevailing financial model in SV tech.


I don't understand. Your options are (1) be small, try to grow fast, (2) be big, (3) be small, don't try to grow fast, and I'm not sure what (4) means. What else is there?


What behavior would prevent a technology like ultra-cheap phased arrays from being locked up when a corporation sees enough potential in the technology or the team to buy them, but then doesn't give either the leeway to develop the market for the technology further?

In this specific case I guess we don't know the full picture of what Lattice Semiconductor intends to do, but there are many examples in software of startups getting acquihired and then the team dissolving into new projects that are more familiar or closely aligned with the pre-existing business model of the company.

Since it's always possible to just turn the startup into a subsidiary, I'm sometimes confused as to why this happens, unless it's an issue of brand dilution or the market opportunity being too small to be worth the overhead of keeping a separate entity tied to a larger one. Which is part of why more opportunities for low-growth or long-tail companies would be important, since now, in the case where the means for bringing the IP to market are eliminated, no one gets anything at all.


Use-it-or-lose-it provisions for all acquired technology, by entities over a certain size?

If you are demonstrably developing a piece of technology, kudos. It's yours, you bought its owner.

If you are not doing anything with it, you're required to offer FRAND license terms to anyone interested in the technology.

Would at least make the tech available that's currently getting tossed in a corporate closet in the basement.


I can't believe this is getting downvoted. It's the equivalent of sticking your head in the sand and hoping that if you can't see a bad thing, then that bad thing doesn't exist. This is a clear case of incentive design, and too strong an arm in preventing acquisitions can come back to bite you. Don't pretend this isn't the case; acknowledge it and factor it into your beliefs around optimal anti-trust law.


That there exists a strictness of anti-trust law / prosecution which would be harmful says nothing about whether stricter anti-trust law than today's would be harmful or beneficial.

So a bit of a red herring.


The founders obviously wanted to exit


Everyone wants to exit; the only question is how many zeroes it'll take for you to admit it. If I offered you a billion dollars for 100% of your startup today, would you really, actually not exit?

What's the old saw? "Now we're just haggling over the price"...

https://quoteinvestigator.com/2012/03/07/haggling/


Then how do we realign capital to stop keeping large loss-running companies, or now-slow-growing companies, on life support, so that basic innovations can still penetrate the market?

IIRC Zuckerberg had the option to relinquish control of his company but other than taking that sweet sweet In-Q-Tel dollar still did his best to stay at the helm.


Aggressively tax large organisations. There'd be a cost to that, but as this comment chain is discussing, there's also a cost to leaving them with the money and allowing them to use it to stifle innovation.


Aggressive taxation is a prisoner's dilemma involving every government in the world.

Good luck solving that problem. (Honestly, I wish good luck to anyone trying to solve it. Corporate tax is fucked.)


Google and, IIRC, Amazon have massive cash reserves on the books, let alone in offshore bank accounts. There is room for better tax policy here for sure.

What should be more surprising, though, is how e.g. Google has ties to the USG (through Schmidt) but can still escape harsher taxation. At the same time, maybe that's why they can escape harsher taxation... because the USG benefits by other means.


I would instead move toward "aggressively tax passive investment above some limit."

The issue isn't large amounts of money. The issue is large amounts of idle money.


Is passive investment really idle? If you give it to a bank, they turn it into loans. If you invest in stocks, you're driving up the value of the shares so the company can sell at a premium to innovate. If you invest in corporate bonds, you're giving cash to companies to innovate (or at least helping increase the value of loans, which decreases future interest rates, encouraging innovation).

The only way to make your money useless is to hold cash since it's designed to decrease in value over time. Pretty much everything else is promoting some kind of innovation/growth.


> If you invest in corporate bonds, you're giving cash to companies to innovate (or at least helping increase the value of loans, which decreases future interest rates, encouraging innovation).

The point is that the way banks or other consolidated funds allocate this funding to corporate accounts can be either inefficient or misanthropic if you take a certain stance toward innovation, namely that innovation isn't quite the same as rent capture and should be more than just efficiency for existing processes.

Share price can decouple from the actual profitability of a corporation, which gives the bank the option of either selling or providing a cash injection if the corporation risks insolvency. And then, if they run out of everyone else's cash, they can ask the Fed to print more just so they can continue the cycle of ownership for that corporation.

But all that does for the economy on a whole is (a) raise the total rate of inflation for everyone, including people who aren't invested in the bank and (b) consolidate more and more assets into the holdings of these banks, and by association the actual wealth (while not actually creating anything that can give you more money than you put into it, otherwise you wouldn't need the cash injection).

This is what 2008 gave us: a feedback loop where more growth of non-profitable companies needs more inflation, and more inflation requires more growth from your holdings.

If Uber and Amazon are anything to go off of, this strategy is actually preferred, probably due to the thesis that "software can eat the world" and you ought to lose as much cash as possible to "innovate" new interfaces to commodity industries at any cost, which essentially optimizes existing supply chains instead of finding uses for novel technological capabilities.

Maybe I have the wrong idea that innovation and profitability go hand in hand? Or that the tech industry should be about tech?

This doesn't have anything to do with the Reserve but it's still an illustrative example: we've been here with WeWork where the business failed and yet the CEO was rewarded billions of dollars on exit. Who knows what he intends to do with that money now as a failed innovator. In hindsight maybe the reason that Adam was given such a large exit package was because of how he facilitated one of the fastest foreign asset takeovers of the last 40-50 years?


And in the end all those big companies become banks.


The linked article says this chip was produced to support a standard that didn't catch on. It's not surprising that it's no longer in production if this is true.


>yet the second the sale is closed 90% of the things that the company did that made it valuable are jettisoned

If it were valuable they wouldn't be jettisoned


This is taken as axiomatic by people who believe in the efficient market hypothesis, but I don't think there's much reason to believe it's true.


This isn't new technology. SiBeam came out of Berkeley Wireless Research Center in the mid-2000s. They had mmWave phased arrays from the start (60 GHz is pretty much useless without a phased array, or a large dish antenna if you're outdoors), but in 15 years, they never managed to find a compelling consumer use case.

I think at that point, the burden is on the company to prove its value.


The efficient market hypothesis implies that P==NP. There's a LOT of reason to believe it's false.


How so? Can you elaborate? Genuinely interested


Well, a naive, overly strong formulation of the efficient markets hypothesis may imply that P=NP. Something like "an optimal trading strategy is a function depending on the entire market history, and in order to find an optimal trading strategy, one must check an exponentially large space of such functions."

The paper is here: https://arxiv.org/pdf/1002.2284.pdf. They do a sketchy reduction from the knapsack problem and 3-SAT to an extremely stylized model of the market.


Here is a paper on it: https://arxiv.org/pdf/1002.2284.pdf


Google 'likelihood of P = NP' or 'efficient market hypothesis P = NP'


Rather than using the metaphors from one field in another field whose day-to-day has little to do with those concepts, try reading a book:

https://www.amazon.com/Efficiently-Inefficient-Invests-Marke...


Even if the EMH were true, it only says "_IF_ a market is efficient, THEN all <modifier depending on the EMH version> information is included in the price."

The EMH does not imply that any particular market is efficient, and if the market isn't efficient, the EMH doesn't apply.

Lots of people do appear to assume the axiom "All markets are efficient", but that is plainly incorrect.


I think there was some Nobel-prize-winning refinement to the theory that basically said markets approach theoretical efficiency in the limit as transaction costs go to zero, and interesting deviations from efficiency happen because transaction costs are not zero. Like, the whole reason we have firms and markets in real life is that zero transaction costs don't exist globally, and so it helps to have ways to reduce them here and there.


Unless, gasp, people can make mistakes.


Some companies are purchased only so they won't become major competitors.


Companies are not primarily purchased, but sold. The previous owner could've continued, but they've chosen not to. We can't dictate to them what to do, right?


The people selling (or at least those with any say in making the sale) and the people doing the work are more or less always different sets of people.

It is not a matter of a blacksmith hanging up their hammer, but of boardrooms playing wealth games. I wouldn't shed any tears over restrictions and obligations put on the actions of the latter.


I agree, but it isn't surprising that this part of the company was shut down. The article says the chip was designed to implement a protocol that ended up going nowhere (WirelessHD). Lattice was probably more interested in the tech and patent portfolio than in the actual product; even though it was an impressive chip, that doesn't mean it was profitable.


Probably because the technology didn't make it as a standard but they wanted the expertise.


Value to the end customer does not equate to value to the big company though. It could be that the products that were shut down were barely turning a profit.


Was that actually a major part of the value of the company? It probably would've been if the standard the part was intended for had ever become widespread, but it may well be that running production primarily for the occasional single-part purchaser simply wasn't profitable.


Lattice didn't exactly buy SiBeam. They bought the company owning SiBeam.


> What would be really cool is to build a USB board that plugs into one of the SB9210 boards and connects to gnuradio. You could do all kinds of neat radar experiments, presence detection, beam forming, you name it. Kind of like a 60 GHz RTL-SDR.

Maybe a dumb question, but how is it even possible to do SDR with a 60 GHz signal on a ~4 GHz CPU via a 5 Gbps USB3 connection?

EDIT: I guess via down-conversion? https://en.wikipedia.org/wiki/Digital_down_converter


The data itself isn't 60 GHz; you're just modulating it onto a 60 GHz carrier. https://en.wikipedia.org/wiki/Modulation
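
As a toy illustration (Python, with frequencies scaled way down so it runs on a laptop; this is generic quadrature downconversion, not the SB9210's actual signal chain): the mixer shifts the modulated carrier to baseband, so only the much slower baseband I/Q samples ever have to reach the CPU or cross USB.

    import numpy as np

    fs = 1e6          # simulation sample rate (stand-in for an ADC rate)
    fc = 100e3        # "carrier" (stand-in for 60 GHz)
    fm = 2e3          # baseband data bandwidth (stand-in for the payload)
    t = np.arange(0, 1e-2, 1 / fs)

    baseband = np.cos(2 * np.pi * fm * t)             # the actual information
    passband = baseband * np.cos(2 * np.pi * fc * t)  # modulated onto the carrier

    # Quadrature downconversion: mix with the local oscillator, then low-pass.
    i = passband * np.cos(2 * np.pi * fc * t)
    q = passband * -np.sin(2 * np.pi * fc * t)
    lpf = np.ones(101) / 101                          # crude moving-average filter
    i_bb = np.convolve(i, lpf, mode="same")
    q_bb = np.convolve(q, lpf, mode="same")
    # i_bb + 1j*q_bb is the slow complex baseband stream an SDR ships over USB.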


Or just buffer each broadcast segment into some 60GHz memory (is there such a thing?), and either rely on your signalling protocol being TDMI (and you therefore not needing to send/receive more than a fraction of the time) or on your broadcast signal being periodically repeating, such that the radio can just treat that memory as a ring buffer to play over and over.

Or, do what old computer architectures did when their CPUs were slower than their DACs: add a Programmable Interval Timer (i.e. a very simple synthesizer) in between, such that you just send a few commands and it adds together some 60GHz triangle and square waves to achieve the signal shape you want. Maybe even add a sequencer, and then stream it some 60GHz MIDI files!


For a concrete example, infrared (such as what's used for TV remotes) typically transmits data on a carrier frequency of 38 kHz, but the actual data rate is much lower, just a few hundred bits per second.

https://techdocs.altium.com/display/FPGA/NEC+Infrared+Transm...
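
Rough numbers behind that, using commonly cited NEC remote-protocol bit timings (just arithmetic, not a working transmitter):

    carrier_hz = 38_000
    bit0_s = 562.5e-6 + 562.5e-6     # '0': 562.5 us carrier burst + 562.5 us space
    bit1_s = 562.5e-6 + 1687.5e-6    # '1': 562.5 us carrier burst + 1687.5 us space
    avg_bit_s = (bit0_s + bit1_s) / 2
    print(f"~{1 / avg_bit_s:.0f} bits/s")                          # a few hundred bits/s
    print(f"~{carrier_hz * avg_bit_s:.0f} carrier cycles per bit")  # ~64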


Modulated carriers are rarely directly synthesized by logic circuitry, even with phased arrays. You tap off a "white light source" of the correct wavelength, just like you do with an LCD.

TX/RX rates are thus independent of the carrier frequency. Only the local oscillator needs to be tunable to the correct carrier frequency.


"Since the wavelength of 60GHz is approximately 5 millimeters, this technology is sometimes referred to as millimeter-wave (mm-wave)." (Quote copied from the article.)

That explains how close together the antennas are - close enough compared to wavelength to be able to beamform.

Edit: it also explains why it would be extremely difficult to build something yourself at 60 GHz, where every wire's length needs to be matched to submillimeter precision, and a submillimeter tail acts as an antenna and as an electronic component.
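
A back-of-the-envelope sketch of those numbers (Python): the 60 GHz wavelength, the ~half-wavelength element spacing that makes beamforming practical, and the per-element phase shift needed to steer a uniform linear array. This is generic phased-array math, not the SB9210's actual geometry.

    import math

    c = 3e8                     # speed of light, m/s
    f = 60e9                    # carrier frequency, Hz
    wavelength = c / f          # ~5 mm
    spacing = wavelength / 2    # typical element spacing, ~2.5 mm

    def steering_phases(n_elements, steer_deg):
        """Per-element phase offsets (radians) for a uniform linear array."""
        theta = math.radians(steer_deg)
        dphi = 2 * math.pi * spacing * math.sin(theta) / wavelength
        return [i * dphi for i in range(n_elements)]

    print(f"wavelength {wavelength*1e3:.1f} mm, spacing {spacing*1e3:.2f} mm")
    print(steering_phases(4, 30))   # 4 elements steered 30 degrees off boresight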


Does anyone recall in the Long Dark Ago when there was a startup that was planning to embed a phased antenna array into a cubicle wall?

It still gets me the level of miniaturization that happens when you come back to an idea 20 years later, instead of watching the incremental changes along the way.


Is this appropriate for creating a wireless HDMI interface for VR headsets?


"That chip was the SB9210 from SiBeam. This part was originally intended to be used for WirelessHD, a protocol for wireless video streaming that never took off. ...at one time they were included in some smart TVs and in some high end laptops."

It sounds like it would be very suitable for a VR headset.


I am pretty sure HTC's Vive Wireless adapter uses a 60GHz link made by Intel ("WiGig").


Yep. It's already being done by Oculus or one of those VR startups.

https://www.displaylink.com/vr


The datasheet says it adds 5 ms of latency, and latency in VR causes nausea.


I'm not up to date on VR tech - is the video memory on board the VR headset? If the memory is onboard, you could do some simple rotations and translations on the current frame while the render pipeline caught up.


There's an overview of some of the various techniques that Oculus uses here: https://uploadvr.com/reprojection-explained/


5ms isn't yet at nausea levels though...


In 2014, Michael Abrash gave a talk summarizing what's needed for a feeling of presence in VR. He said 20 ms motion-to-photon latency is required for the virtual world to feel like it's "nailed in place". So 5 ms is 25% of the latency budget.


Not all motion-to-photon latency is created equal.

EDIT: To clarify what I meant, HMDs typically use techniques like ATW/ASW/etc (link posted by Rebelgecko) to do just-in-time correction of the rendered image. The end-to-end motion-to-photon latency of the entire pipeline grows but the part that generates "sick" is very short.


The number I've heard, don't ask me from where, is 16 ms.


That's framerate (how often images need to be rendered) at 60 Hz. Latency is separate from that: you can have a 60 Hz framerate with 24-hour latency if you watch a video you recorded yesterday.

In the parent's case, 20ms latency from movement to visible motion is part of the pipeline that:

- reads input

- evaluates solution

- returns solution to your screen

All kinds of things add to this latency: polling frequency of your input device, bus speed of the device, how fast you can update the world, how fast you can render the update, how quickly that updated image can be sent to the screen, how quickly the screen is able to turn this into visible light, etc.


60 FPS ~= 16.67 ms/frame. So if you don't want to drop frames on a 60 Hz monitor, your frame time budget is 16 ms. If you want input to appear on the next frame, you've got at most 16 ms. If you're targeting a 120 Hz monitor, then you've got 8 ms. Etc.
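
For reference, the arithmetic as a trivial Python sketch:

    def frame_budget_ms(refresh_hz):
        return 1000.0 / refresh_hz

    for hz in (60, 90, 120):
        print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
    # 60 Hz -> 16.67 ms, 90 Hz -> 11.11 ms, 120 Hz -> 8.33 ms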


I think his point might be "adds" (increasing existing latency by 5 ms).


I haven't used it, but the reports I have seen say the Vive wireless adapter seems to work fine without noticeable latency.


Maybe a silly question: are there other commercially available chips with phased arrays?

I believe RTL-SDR extended the RTL products' end of life much further.

Could the same occur with this (or similar) phased-array chips?


The closest thing I can find is this similar-sounding 60 GHz radar chip on Digi-Key by Acconeer, but it doesn't look like you could control it or use it like the SiBeam part...

https://www.digikey.com/product-detail/en/acconeer-ab/A111-0...


Yes, I was indeed mistaken; this is just a Doppler module and not a phased array. Bummer.


I wonder what sorts of things the chips were used for inside laptops and smart TVs. He mentions it being used for streaming, but it's a directional radar chip; it seems it would be used for doing a 3D scan of an area?


It's a directional transceiver. Directional transceivers happen to be usable as radar, but these were not intended for that use case: https://en.wikipedia.org/wiki/WirelessHD


Whatever happened to that Google millimeter-wave radar project to allow devices to see your hand positions? It ended up in the Pixel 4 as Motion Sense, but all it does is let you make swiping gestures in the vicinity of the phone. There should be more useful applications for that technology.


It's not limited to Google. For example, this TI mmWave sensor (http://www.ti.com/product/IWR6843) has an associated reference design for gesture control (http://www.ti.com/tool/TIDEP-01013).

(Disclaimer: I've never actually used that chip or reference design and have no idea how well it actually works in practice. I just think it's really neat that mmWave radar chips are readily available at very affordable price points.)


The last application I saw was using micro-Doppler signal recognition to determine whether someone has fallen down. Consider a nursing home or hospital bathroom, where cameras would be considered intrusive.


I wonder what sort of API could be used to control that kind of phased array. I guess that's the point of the blog post though: asking for help with reverse engineering its interface.


Bought some from eBay... Cost about $26, but the post has been out ~2 weeks. Can't wait to play with it and see what can be done.


Hope you make a post about it.



