Surface Acoustic Wave filters are used in RF signal processing. They work by converting an electrical signal to a sound wave on the surface of a piezoelectric material, then using mechanical resonance effects to filter the signal to a very narrow band before converting it back to an electrical signal.
In this setup, they have figured out how to amplify the sound waves without having to convert them back to electrical signals first. The first prototype got hot, so they could only run it in pulses... the second device worked continuously and delivered a modest amount of power.
Eventually this technology will work its way down into all of our communications gear, making it even smaller and more efficient.
Your instinct is spot on - it does not; it just works at high frequencies in the RF range, using acoustic resonance to amplify instead of electrical amplification. https://en.wikipedia.org/wiki/Acoustic_resonance
One big aspect of this is that, from what I can tell, it has lower noise. A pre-amp/amp with less noise in the RF domain is epic: it will allow lower power to be used for transmit and also offer longer range at the same power used today. An amplifier with less noise than existing electrical-only options has so many uses that, as a discovery, I'd rank this up there with graphite.
I should have clarified: "acoustic" just means a vibration, like sound, but in this case at 274 megahertz, about 14,000 times higher pitch than human hearing... roughly two and a half times the frequency of FM radio transmissions (quick check below).
It should eventually result in devices that work beyond WiFi frequencies.
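A quick sanity check of the ratios mentioned above, using my own reference points (a 20 kHz upper limit for human hearing and the top of the 88-108 MHz FM broadcast band), not numbers from the article:

    # Checking the ratios above (assumed reference points: 20 kHz hearing
    # limit, 108 MHz top of the FM broadcast band).

    F_SIGNAL = 274e6      # Hz, the acoustic frequency in question
    F_HEARING_MAX = 20e3  # Hz, rough upper limit of human hearing
    F_FM_TOP = 108e6      # Hz, top of the FM broadcast band

    print(f"vs hearing: ~{F_SIGNAL / F_HEARING_MAX:,.0f}x higher")  # ~13,700x
    print(f"vs FM:      ~{F_SIGNAL / F_FM_TOP:.1f}x higher")        # ~2.5x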
It doesn't work with low frequencies; it works with very high frequencies, but in the acoustic domain. You're looking at wavelengths on the order of micrometers.
It's because the speed of sound (in any material) is much slower than the speed of light (even in high-Dk materials). To make a selective and low-loss filter you want to use resonators whose dimensions are a large fraction of a wavelength (i.e. a quarter wave). When the wave speed is slower, a wavelength is compressed into less space (wavelength = wave speed / frequency, i.e. meters per second divided by cycles per second), which is what enables the miniaturization; rough numbers are sketched below.
I believe the acoustic loss in many of these materials is also lower than the dielectric loss in most materials, which further improves filter Q and achievable selectivity.
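A back-of-the-envelope sketch of that miniaturization factor, assuming a typical surface-acoustic-wave velocity of about 4 km/s (the actual material constant will differ):

    # Wavelength comparison at the same frequency: electromagnetic vs. acoustic.
    # Assumed numbers, not from the article.

    C_LIGHT = 3.0e8     # speed of light in vacuum, m/s
    V_ACOUSTIC = 4.0e3  # typical surface-acoustic-wave velocity, m/s (assumed)
    FREQ = 274e6        # Hz, the frequency mentioned elsewhere in the thread

    em_wavelength = C_LIGHT / FREQ           # ~1.1 m
    acoustic_wavelength = V_ACOUSTIC / FREQ  # ~15 micrometers

    print(f"EM wavelength:       {em_wavelength:.2f} m")
    print(f"Acoustic wavelength: {acoustic_wavelength * 1e6:.1f} um")
    print(f"Size reduction:      ~{em_wavelength / acoustic_wavelength:,.0f}x")

So a quarter-wave element shrinks from roughly 27 cm to a few micrometers at the same frequency.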
From what they said, an incoming RF signal is converted to an acoustic signal. That acoustic signal is probably not in the human audible range. It's that signal which is filtered and then amplified with this new device, IIUC.
It is thousands of times higher than the highest frequency that you can hear. That doesn't rule out interference with another frequency producing a difference frequency in the audible range, but all things considered this should not happen. Also, there are no coils involved; it's piezoelectric.
"The gain mechanism, which is analogous to an electromagnetic traveling-wave tube amplifer32, depends on the average interaction of the charge carriers with the piezoelectric electric field. This interaction controls the energy transfer from or to the acoustic wave, resulting in acoustic wave attenuation or amplification, respectively."
We just returned to office recently, and a team I hadn't worked with in about 6 years just moved to the next row over. Their second day in their new location, and I overheard them talking about a rare segfault (thousands of processes, hits them about once a month) that had been perplexing them for months. I had a bit of intuition and a good guess, and in a few hours had an ugly proof-of-concept fix they were testing. It took another week for a colleague to clean up, productionize, and properly test a robust solution.
It got me thinking about ways to artificially introduce opportunities for such serendipity in a remote work environment, but I haven't come up with anything good yet.
Maybe a channel that's for weird problems? They'd still have to explicitly post the weird problem, of course, so it's not nearly as good. But we get some good serendipity in my work Slack when people just post mysterious-to-them experiences that they're still working on.
A remote office has a LARGE wall TV relaying images and sound from another remote office. Set the volume so that normal tech talk is just at the threshold of hearing.
and/or co-working spaces where we can cross-pollinate ideas with people on the fringes of our own experience... or even people with various socioeconomic backgrounds and demographics... there must be a sweet spot between "old school IBM suits" and "coding from soup kitchens"
It's not equivalent, but people create a similar dynamic online. IRC is a great example. Today there are several Slacks/Discords I idle in that are just friends sharing what's going on throughout their day. Both contexts have advantages and disadvantages imo.
> [At] Bell Labs I came into a very productive department. Bode was the department head at the time; [Claude] Shannon was there ... I shared an office for a while with Shannon. At the same time he was doing information theory, I was doing coding theory. It is suspicious that the two of us did it at the same place and at the same time - it was in the atmosphere.
Digression: why do sites like Wikipedia and Twitter use different URLs for their mobile versions, given that in the end it's essentially the same stuff being served?
For Wikipedia, I imagine they have an interest in making the content as accessible as possible on the maximum number of devices. Which means pages crafted to be lightweight, with minimal Javascript. Yes, you could do that all in one page, but that tends to get sacrificed at the altar of developer velocity, so I think the strict isolation is user-forward.
Wikipedia launched that in July 2013 in response to rising web traffic and the existing site not working well on mobile browsers. That was preceded by Wapedia, an independent site that started in 2004 to provide a reformatting proxy for mobile use: https://en.wikipedia.org/wiki/Wapedia
The basic answer is that early on the tooling wasn't there to support very different devices from the same HTML. You could try tricks to detect mobile devices and serve different content (a sketch of that kind of trick is below), but it was much easier just to let people decide which version they wanted.
I haven't kept up, but I know the tooling keeps improving, so I expect that few people today are creating mobile-specific URLs, and that the existing ones will go away in the long run.
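As a purely hypothetical sketch of the kind of trick mentioned above (user-agent sniffing plus a redirect to a separate m. host; made-up names and hints, not Wikipedia's actual setup):

    # Hypothetical WSGI app sketching old-school mobile detection: sniff the
    # User-Agent and redirect mobile browsers to a dedicated m. host.
    # Illustrative only; not how any particular site actually does it.

    MOBILE_HINTS = ("iphone", "android", "blackberry", "windows phone")

    def app(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        host = environ.get("HTTP_HOST", "example.org")
        path = environ.get("PATH_INFO", "/")

        if any(hint in ua for hint in MOBILE_HINTS) and not host.startswith("m."):
            # Mobile browser hitting the desktop host: bounce to the mobile site.
            start_response("302 Found", [("Location", f"https://m.{host}{path}")])
            return [b""]

        start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
        return [b"<html><body>Desktop page</body></html>"]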
My guess is that it was easier to set up that way at the time, and then the configuration was kept when a mobile version was integrated into MediaWiki itself.
Flash memory requires a relatively high voltage for block erase operations, but this is at least twice the voltage that flash uses. It's likely to adopt the same strategy of including on-chip charge pumps to generate the required high voltage from one of the lower voltages available from the system.
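For reference, the usual charge-pump math (the idealized, unloaded Dickson approximation, which I'm assuming here rather than anything stated in the article): each stage adds roughly one clock swing minus a diode drop.

    # Idealized, no-load Dickson charge pump estimate (textbook approximation).
    # Real pumps deliver less under load, but this shows how a low supply rail
    # reaches flash-erase-class voltages.

    def dickson_vout(v_in: float, n_stages: int, v_clock: float, v_diode: float = 0.6) -> float:
        """Approximate unloaded output of an N-stage Dickson charge pump."""
        return v_in + n_stages * (v_clock - v_diode) - v_diode

    # Example: pumping a 3.3 V rail with a full-rail clock swing.
    for n in (3, 5, 8):
        print(f"{n} stages from 3.3 V -> ~{dickson_vout(3.3, n, 3.3):.1f} V")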
I'm sure some parts of power conversion circuits momentarily reach voltages like that, particularly when you've got inductors behind switching circuits. The wiki page for Qi says that transmitter circuits can reach 50 V, and I believe a few phones can serve as Qi chargers.
Instead of having RF interference, if we have acoustic transmitters, would you have trouble getting a signal at a loud concert for instance (acoustic interference)?
These are operating on EM/sound at frequencies orders of magnitude higher than anything in the audible range; there's basically no chance anything you can hear (no matter how loud) could interfere with this.
But are there things that produce “loud” acoustic waves at these ultrasonic frequencies out in the natural or manmade world, that we mostly just ignore because we can’t sense them? (in about the same way that we ignore e.g. the massive amount of radio interference that switching power supplies create, because we don’t use those bands for anything)
This technology is entirely analog. It has nothing to do with quanta. It simply converts electromagnetic signals of a certain band into electricity through an acoustic coupling, and amplifies the signal too, in an all-in-one manner.
>"While most radio components, including amplifiers, are electronic,
they can potentially be made smaller and better as acoustic devices. This means they would use sound waves instead of electrons to process radio signals.
“Acoustic wave devices are inherently compact because the wavelengths of sound at these frequencies are so small — smaller than the diameter of human hair,” Sandia scientist Lisa Hackett said. But until now, using sound waves has been impossible for many of these components."
[...]
>"Previous researchers hit a dead end trying to enhance acoustic devices, which are not capable of amplification or circulation on their own, by using layers of semiconductor materials. For their concept to work well, the added material must be very thin and very high quality, but scientists only had techniques to make one or the other.
Decades later, Sandia developed techniques to do both in order to improve photovoltaic cells by adding a series of thin layers of semiconducting materials."
[...]
>"Fusing an ultrathin semiconducting layer onto a dissimilar acoustic device took an intricate process of growing crystals on top of other crystals, bonding them to yet other crystals and then chemically removing 99.99% of the materials to produce a perfectly smooth contact surface. Nanofabrication methods like this are collectively called
heterogeneous integration..."
PDS: Let's revisit this quote: "This means they would use sound waves instead of electrons to process radio signals."
So, I'm going to go for "full crackpot" here...
How do we know that electrons aren't sound waves -- and/or radio-wave-like -- in nature?
If <thing A> in the universe can be converted into <thing B> via a transducer, then that shows us that <thing A> and <thing B> share a relationship which might not be completely understood according to the scientific reasoning of the day...
Compare to the concept of "identities" in Mathematics.
Once you have a mathematical identity for some term in an equation, you can reason about that equation in ways you couldn't before -- with the guarantee that, because the identity has previously been proven, the reasoning about the equation after substituting a term with one of its proven identities will be correct...
In other words, can we reason about electrons as sound waves, and/or as radio waves, and will the results of that reasoning be correct?
> In other words, can we reason about electrons as sound waves, and/or as radio waves, and will the results of that reasoning be correct?
The movement of electrons is a bit like the movement of air; that's why they share the relationships. This doesn't tell you much about the nature of air molecules or electrons. (It's actually using a solid, not air, as the acoustic medium, but my point still stands.)
Look into the formal definition of a field in physics, and particle/wave duality, to gain clarity on electrons vs electromagnetic waves. The math of wave mechanics is generally common across waves propagating in different types of fields, although I'm not sure if that's what you're getting at.
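For what it's worth, the shared math pointed to above is just the classical wave equation, which has the same form whether u is an acoustic displacement or a component of the electromagnetic field; only the propagation speed v (sound vs. light) differs. This is a generic statement, not something from the article:

    % Generic wave equation: identical form for acoustic and electromagnetic
    % waves; only the propagation speed v changes.
    \[
      \frac{\partial^2 u}{\partial t^2} = v^2 \, \nabla^2 u,
      \qquad
      \lambda = \frac{v}{f}
    \]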