A common misconception about the Nyquist criterion is that the sampling rate needs to be twice the highest frequency.
Your sampling rate really only needs to be twice the bandwidth.
e.g. your bandwidth is 100 MHz centered at 1 GHz (the signal actually needs to be bandlimited to that 100 MHz). You do not need to sample at 2.2 GHz. You can sample at 200 MSPS (really, a little more than that, say 210 MSPS, so that the band of interest doesn't butt up against the Nyquist zone edges).
The folks who are telling you you’re wrong don’t understand Nyquist’s criterion very well. Curse those undergrad courses for only effectively teaching about Nyquist at baseband frequencies.
You can sample 100MHz of bandwidth at 1GHz just as you describe at 210MSPS. You’ll get everything in the 950-1050MHz band.
Trouble is, without an antialiasing filter, you’ll get every other band that’s a multiple of that sampling rate. The Nyquist criterion works at every multiple of the sampling frequency.
Bandpass filter your analog input appropriately from 950-1050MHz and you’re golden.
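As a concrete sketch of that folding (illustrative numbers, using numpy; the 980 MHz tone is just an arbitrary frequency inside the 950-1050 MHz band from the example):

```python
import numpy as np

# A 980 MHz tone (inside the 950-1050 MHz band) sampled at 210 MSPS.
# It folds by multiples of fs: 980 - 4*210 = 140 MHz, which then reflects
# off the Nyquist zone edge to 210 - 140 = 70 MHz.
fs = 210e6
n = np.arange(4096)
x = np.cos(2 * np.pi * 980e6 * n / fs)

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f_alias = np.fft.rfftfreq(len(x), d=1/fs)[np.argmax(spectrum)]
print(f_alias / 1e6)  # ≈ 70: the RF tone appears at its folded frequency
```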
This is the way nearly every commodity Wi-Fi chip downsamples raw 2.4/5 GHz RF. Sigma-delta ADCs used this way are cheap, fast, and efficient in die area.
The most fiendish application of this effect that I've seen is polyphase filtering. I can't remember the details, but I do remember the wonder of understanding (in a lecture by fred harris) how most of the logic was running at a low sampling rate even though the input was at a high rate. The mixing was done by aliasing.
Polyphase filtering is less crazy than it initially sounds. Conceptually, you can think of it as: I have this signal at frequency f. I want to resample it to frequency (b/a)*f, where a and b are integers. (You can also use polyphase filtering to resample at non-rational or varying ratios, essentially by approximating a rational, but let's ignore that for the moment.) a and b can be pretty large if you want, e.g. a=160, b=147 will downsample from 48 kHz to 44100 Hz.
So what you do to resample a signal (again conceptually) is: 1. Insert <b>-1 zeros between every pair of input samples (upsampling by <b>, which repeats the spectrum <b> times), 2. Apply a suitable (long!) FIR lowpass filter so that the signal is bandlimited, 3. Take every <a>-th sample (which doesn't cause any aliasing, thanks to #2).
Now the core of the polyphase filtering idea: we don't actually need to evaluate the FIR filter for the samples we throw away in #3, and most of the filter's inputs are zero thanks to #1. So instead of storing all the zeros and stuff, we simply pick out every <b>-th tap of the FIR filter and apply that to the input signal directly. But since a and b don't line up perfectly, we get a different subset of the FIR taps for every output sample; we have a time-varying filter (or a filterbank, if you want). You get <b> different such filters before you're back where you started.
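A small numpy sketch of that equivalence, with toy factors (upsample by 2, decimate by 3, i.e. resample by 2/3; the tap count is arbitrary). The polyphase loop touches only one phase of the filter per output sample, yet matches the naive zero-stuff/filter/decimate pipeline:

```python
import numpy as np
from scipy import signal

up_factor, down_factor = 2, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
h = signal.firwin(31, 1.0 / max(up_factor, down_factor))  # lowpass for the high rate

# Naive version: materialize the zero-stuffed stream, filter, decimate.
stuffed = np.zeros(len(x) * up_factor)
stuffed[::up_factor] = x
y_naive = np.convolve(stuffed, h)[::down_factor]

# Polyphase version: never materialize the zeros. For output sample m, only
# taps j with j ≡ m*down_factor (mod up_factor) ever see a nonzero input,
# so each output uses one "phase" (subset) of the filter taps.
y_poly = np.zeros(len(y_naive))
for m in range(len(y_poly)):
    k = m * down_factor                    # position in the virtual high-rate stream
    for j in range(k % up_factor, len(h), up_factor):
        i = (k - j) // up_factor           # index back into the original input
        if 0 <= i < len(x):
            y_poly[m] += h[j] * x[i]

print(np.allclose(y_naive, y_poly))  # True: same result, a fraction of the work
```

In practice, scipy.signal.resample_poly (or upfirdn) does this decomposition for you.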
Implemented a polyphase filter in Verilog once. I learned the hard way that it's easy to mix unwanted stuff into your polyphase chain if you're not careful with the implementation.
I know what you're getting at, but your statement, as others have pointed out, is incorrect. Your sampling rate always always has to be twice the highest frequency of the signal you are sampling.
If you are sampling an RF-modulated signal with a center frequency of 1GHz and 100MHz of baseband bandwidth, then yes, you do need to sample at 2.2GHz+. And some applications do exactly that.
If you're taking the RF signal, mixing it down to baseband, and filtering it to bandlimit, then you have a signal with maximum frequency component of 100MHz, and in that case, yes, your sampling rate can be 200MHz+
From an information theoretic perspective (which is the perspective Nyquist was originally coming from, though it didn't yet have that name), you don't need to mix the signal down. Assuming it is truly band-limited, you can sample the signal directly at RF, and reproduce it from those samples. Additionally, you will need to modulate the reproduced signal into the original band, which means you need to know where that band is - perhaps this is the detail you're pointing out?
Another way of looking at it is that sampling inherently does the mixing down to baseband. Although it may not be exactly the baseband you want if the spectrum isn't cleanly symmetric about a multiple of the sample frequency.
I've worked on ultrasound systems that definitely worked this way, not just in theory but also in practice. Bandpass filter 20–40 kHz, sample directly at 40 kHz (giving 20 kHz bandwidth). No mixer step involved, but your spectrum becomes inverted (e.g. if you do an FFT, a 22 kHz tone will be in the 18 kHz bin, not the 2 kHz bin as you would perhaps expect).
Aliasing makes more sense (to me, anyway) if you think about the spectrum of complex signals, in which signals of real samples are modeled as the sum of positive and negative frequencies.
In the sampling operation, all sinusoids are shifted down to the "natural baseband" by adding or subtracting some multiple of the sampling frequency that places the resulting frequency within +/- half of the sampling frequency. So for your example of 22kHz, that real frequency has two components: +22kHz that gets shifted down to -18kHz=22kHz-40kHz, and -22kHz that gets shifted up to +18kHz=-22kHz+40kHz.
Note that this "natural baseband" is an abstraction of our own invention. You can just as easily think of the spectrum as ranging from 0Hz to the sampling frequency f_s, rather than -f_s/2 to f_s/2. The fact that some prefer one over the other is precisely why fftshift exists.
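That folding arithmetic is easy to check numerically. A quick sketch with the 22 kHz / 40 kHz numbers from the ultrasound example:

```python
import numpy as np

# A 22 kHz tone, bandpass-sampled at 40 kHz: +22 kHz folds to -18 kHz and
# -22 kHz folds to +18 kHz, so the FFT peak lands in the 18 kHz bin.
fs = 40e3
n = np.arange(4000)                        # 0.1 s of samples -> 10 Hz bin spacing
x = np.cos(2 * np.pi * 22e3 * n / fs)

spectrum = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(len(x), d=1/fs)[np.argmax(spectrum)]
print(f_peak)  # ≈ 18000.0, not 2000.0
```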
To clarify: "band-limited" usually means X(w) = 0 for abs(w) > B for some B, where X is the frequency spectrum. And that's the definition Shannon used in the original proof, which is where the idea of Nyquist Frequency comes from.
If you add the additional constraint of the signal being "bandpass-limited", where X(w) = 0 except for A < abs(w) < B for some A, B, then yes, you can undersample.
And that's where the information-theory idea comes in where the amount of information contained in the band only "needs" 2X sampling rate to reconstruct perfectly.
You can think of aliasing being somewhat orthogonal to that in the sense that you need 2X bandwidth so you don't corrupt the signal, but 2X max frequency so you don't alias anything else into the signal. (I say this realizing that aliasing is what would cause the former signal corruption, hence "somewhat")
"In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal.
When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion."
Yes, but this only works if, as the page points out, the signal is bandpass filtered, which GP did not mention. It's not true in the general sense, nor is it practical for many (most?) RF systems, especially those with multiple channels.
I can see why you might think that but consider that in RF systems, while the wanted signal is bandlimited, you also have a lot of unwanted "blockers" all over the spectrum that need to be dealt with before sampling.
I'm afraid you're mistaken (source: worked as DSP engineer for 15 years). Often you apply your filter around the RF frequency you want and then sample at a lower rate. You're right that the signal will get aliased doing that, but the information is always preserved.
If you sample s.t. your folding frequencies are in an appropriate place, you can fold your desired region into the first nyquist region without needing to mix it down. This is especially desirable if you can avoid having to build an IQ mixer because they're hard to keep balanced.
The worst case doing this is that your signal spectrum is reversed in frequency, but you can correct that easily digitally.
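That digital correction is cheap: multiplying x[n] by (-1)^n shifts the spectrum by fs/2, which mirrors a real signal's content within [0, fs/2] and flips the band back the right way round. A numpy sketch with illustrative numbers:

```python
import numpy as np

fs = 40e3
n = np.arange(4000)
x = np.cos(2 * np.pi * 18e3 * n / fs)     # inverted-band image sitting at 18 kHz

y = x * (-1.0) ** n                       # shift spectrum by fs/2 = 20 kHz
spectrum = np.abs(np.fft.rfft(y))
f_peak = np.fft.rfftfreq(len(n), d=1/fs)[np.argmax(spectrum)]
print(f_peak)  # ≈ 2000.0: the tone moves to 20 kHz - 18 kHz = 2 kHz
```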
I'm afraid I'm not mistaken (source: I design integrated RF transceivers) ;)
Yes, you can subsample if you have a suitably bandpass-limited signal. But that's not the general case, nor is it what the nyquist-shannon theorem proves, which is where "nyquist frequency" comes from.
Nyquist frequency by the original definition is 2X highest frequency, though some papers and textbooks have evidently started using it to mean 2X bandwidth, enough so that wikipedia[1] actually mentions it.
In integrated circuits, IQ mixing isn't problematic as we can fairly easily do gain and phase calibration to correct for the mismatch.
You have to have a band limited signal to sample anyways, where it's at in the spectrum doesn't matter. The first thing you'll do before feeding anything to an ADC is running it through a filter to make _sure_ it's band limited. Whether that filter's at DC or some Rf doesn't matter.
Here's the result from his original paper where he specifically says that it doesn't have to be at DC:
My point is that practically speaking, it does matter where the signal is, depending on how you filter it. If you lowpass filter an RF- (or, more realistically, IF-) centered signal, you can't just sample it at 2X bandwidth because you'll get aliases from the unwanted content between DC and the bottom frequency edge of the signal.
It may not be a common scenario anymore, but it was very common in the early GSM days, when the signal wasn't mixed to DC but to near-DC.
Ah yes you're right that you have to be careful, it'll fold at multiples of the nyquist frequency and you want to make sure your SOI is entirely contained in one of those zones.
That's true, but there are a couple of additional concerns. First, your DAC or ADC needs enough analog bandwidth. Working in a higher Nyquist zone also requires more amplification, since the signal will be considerably weaker, and more complex filtering to remove content from the other zones.
Or use the traditional "lock-in" amplifier technique of mixing with a known reference at the frequency mid-point of the range you care about? (That's how NMR spectrometers / MRI scanners worked for decades.)
Isn't the lock-in amplifier technique used to improve the SNR of a signal by filtering out noise at frequencies outside a specific range of interest? High-speed sampling would still be required to accurately measure transient signals.
Consider a signal whose value at x seconds is f(2x) - 2 f(3x) + f(4x), where f(x) = sin(2πx)/x. Considering that the absolute frequencies of f(x) are uniformly distributed from 0 to 1 Hz, the absolute frequencies of this total signal should be constrained to between 2 and 4 Hz. Thus, a bandwidth of 2 Hz. But if we sample at 6 Hz (three times the bandwidth!) including x = 0, we'll get all zeros.
Granted, we might say that from the perspective of the complex Fourier transform using signed frequencies, the frequencies of this signal actually range over [-4 Hz, -2 Hz] U [+2 Hz, +4 Hz]. But I'm not sure that's the interpretation you had in mind.
That is, it's not quite as simple as saying you just need to sample at any rate at least twice the bandwidth. Rather, it's the more complicated behavior described by this graph: https://en.wikipedia.org/wiki/Undersampling#/media/File:Samp.... The general rule is that the ratio of the highest frequency in the signal to half the sample rate, and the ratio of the lowest frequency in the signal to half the sample rate, have to lie within an interval of consecutive natural numbers.
When the lowest frequency is zero, this is the familiar rule that the sample rate has to be at least twice the highest frequency in the signal. But more generally, it's more complicated.
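That rule can be written as a small checker (a sketch; `undersampling_ok` is a hypothetical helper name, and the condition is the standard textbook bandpass-sampling inequality: 2*f_hi/n <= fs <= 2*f_lo/(n-1) for some integer n):

```python
import math

def undersampling_ok(f_lo, f_hi, fs):
    """True if sampling the band [f_lo, f_hi] at rate fs avoids self-aliasing."""
    n_max = math.floor(f_hi / (f_hi - f_lo))
    for n in range(1, n_max + 1):
        upper = 2 * f_lo / (n - 1) if n > 1 else math.inf
        if 2 * f_hi / n <= fs <= upper:
            return True
    return False

print(undersampling_ok(2, 4, 6))               # False: a zone edge (3 Hz) falls inside the band
print(undersampling_ok(2, 4, 8))               # True: ordinary 2x-highest-frequency sampling
print(undersampling_ok(950e6, 1050e6, 210e6))  # True: the 210 MSPS example upthread
```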
Whoops, I should've pulled the division by x out of the definition of f. The example I had in mind was [sin(4πx) - 2 sin(6πx) + sin(8πx)]/x. [Another good example is [sin(6πx) - 2 sin(8πx) + sin(10πx)]/x, whose frequencies are between 3 Hz and 5 Hz, thus a bandwidth of 2 Hz, but sampling at 4 Hz or even 8Hz gets all zeroes.]
Anyway, the details on that example don't matter, the Wikipedia graph and article makes things more clear.
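For what it's worth, the corrected example does check out numerically (a quick sketch):

```python
import numpy as np

# g(x) = [sin(6πx) - 2 sin(8πx) + sin(10πx)] / x occupies 3-5 Hz
# (2 Hz of bandwidth), yet every sample taken at 8 Hz is zero.
def g(x):
    return (np.sin(6*np.pi*x) - 2*np.sin(8*np.pi*x) + np.sin(10*np.pi*x)) / x

samples = g(np.arange(1, 200) / 8.0)   # skip x=0, where the limit (6π-16π+10π) is also 0
print(np.max(np.abs(samples)))         # effectively zero (only floating-point noise)
```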
Is this assuming you have some analog hardware that's demodulating the signal in front of your ADC? How do you demodulate a signal from a 1GHz carrier with 200 MSPS?
As the sibling comment mentioned, you don’t need to demodulate first, because that is actually what the sampling process of your ADC does.
You can think of it as multiplying the original signal by a comb (in the time domain) of delta functions, which folds everything (in the frequency domain) back into the nyquist frequency of your ADC. Each delta function corresponds to one sample. If your original signal was truly band-limited to 100MHz, then what comes out is a replica of the band limited signal.
One catch (which is actually fairly easy to meet in practice) is that the sampling aperture needs to be short, on the order of 1/f of the carrier frequency or less. This is what YakBizzaro is talking about (ADC analog bandwidth) in their sibling post.
Thanks for the explanation! Between your comment and the Undersampling wiki page diydsp linked to I think I am on the path to enlightenment.
> If your original signal was truly band-limited to 100MHz
In practice, this means you need to band pass before the ADC, right? i.e. "signal" in this case is the entire input to the ADC and not just the particular modulated signal you care about
> In practice, this means you need to band pass before the ADC, right? i.e. "signal" in this case is the entire input to the ADC and not just the particular modulated signal you care about
Right and right.
And, you’d normally want that to be a contiguous 100 MHz band of frequencies (you could in principle have multiple discontiguous bands that add up to 100 MHz if they are spaced right (they don’t fold down to the same base frequencies), but that would be quite an unusual application).
To quote a meme: “That’s the neat part. You don’t.” If you bandlimit your input, aliasing effectively strips out the carrier tone and leaves the modulated signal.
In a way, you’re relying on aliasing / frequency folding to do it for you.
You can even improve information transfer in these scenarios by using a synchronizer, which allows you to phase shift your sampling to be at the ideal transition point in your information stream.
No, this assumption is incorrect. You can ADC first and then demodulate afterwards. The spectrum of your high-frequency (near 1 GHz) signal will be aliased at frequencies below the Nyquist frequency, but it’s easy to calculate the original frequency, if you know that the signal is band-limited.
You do not want to "correct" the wiki because the wiki is not wrong. The person you are replying to is clearly thinking about some sort of RF system (given the frequencies mentioned) where it's important to have a baseband filter to eliminate aliasing, and that filter will have some sort of roll off region, resulting in a higher sample rate than available bandwidth. That's all great, but the Nyquist theorem isn't talking about an RF system. It's referring to sampling. When the wiki uses the word "bandwidth", they mean the frequencies that don't alias given a specific sample rate.
Is the wikipedia page really wrong though? Highest frequency is what the mathematicians care about. EEs care about bandwidth because they're always modulating stuff and thinking in terms of carrier and baseband. Strictly speaking, what the EE grandparent suggested is using aliasing to mix the signal down to baseband.
Yeah but you also need the bandwidth of the sampler to exceed the highest frequency of the signal. Most samplers are limited by some kind of RC time and not their sinc envelope. Most.