So that is "432 Mbit/s per laser, and 9000 lasers total". I don't know about you guys, but I find that statement much more relatable than "42 PB/day". Interestingly, they also say each laser "can sustain a 100Gbps connection per link" (although another part of the article even claims 200 Gbit/s). That means each laser is grossly underused on average, at 0.432% of its maximum capacity. Which makes sense, since 100 Gbit/s is probably only achievable in ideal situations (e.g. two satellites very close to each other), so these laser links are used in bursts and the link stays established only for a few tens of seconds or minutes, until the satellites move away and are no longer within line of sight of each other.
And with 2.3M customers, that's an average 1.7 Mbit/s per customer, or 550 GB per customer per month, which is kinda high. The average American internet user probably consumes less than 100 GB/month. (HN readers are probably outliers; I consume about 1 TB/month).
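The per-laser and per-customer figures above follow directly from the article's numbers (42 PB/day, 9000 lasers, 100 Gbit/s peak, 2.3M customers); a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check using the article's figures.
PB = 1e15  # petabyte in bytes (decimal, as ISPs count)

daily_bytes = 42 * PB
seconds_per_day = 24 * 3600

per_laser_bps = daily_bytes * 8 / 9000 / seconds_per_day
print(f"per laser: {per_laser_bps / 1e6:.0f} Mbit/s")   # ~432 Mbit/s
print(f"utilization: {per_laser_bps / 100e9:.3%}")      # ~0.432% of 100 Gbit/s

per_customer_bps = daily_bytes * 8 / 2.3e6 / seconds_per_day
gb_per_month = per_customer_bps / 8 * seconds_per_day * 30 / 1e9
print(f"per customer: {per_customer_bps / 1e6:.1f} Mbit/s, {gb_per_month:.0f} GB/month")
```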
>these laser links are used in bursts and the link stays established only for a few tens of seconds or minutes, until the satellites move away
The way Starlink satellites are in orbit, the same satellites will remain "ahead" and "behind" you in the orbital plane. Those laser links (specifically!) will remain relatively persistent. This arrangement is similar to Iridium FYI.
FTA: "in some cases, the links can also be maintained for weeks at a time"
> FTA: "in some cases, the links can also be maintained for weeks at a time"
I think there is a lot of variance. The article also states 266,141 “laser acquisitions” per day which, if every laser link stayed up for the exact same amount of time across 9000 lasers, means the average link remains established for a little less than an hour: 9000 (lasers) / 266141 (daily acquisitions) × 24 × 60 ≈ 49 minutes.
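The 49-minute estimate above can be sketched as a one-liner (same figures as quoted from the article):

```python
# Mean link lifetime if the daily "acquisitions" are spread uniformly
# over all 9000 lasers.
lasers = 9000
daily_acquisitions = 266_141
minutes_per_day = 24 * 60

avg_link_minutes = lasers / daily_acquisitions * minutes_per_day
print(f"{avg_link_minutes:.0f} minutes")  # ~49
```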
So some links may stay established for weeks, but some only for a few minutes?
I would guess that the links between satellites on the same orbit stay up for weeks, but the ones that cross between orbits have to be constantly re-established.
I believe Starlink (like Iridium) doesn't even try to establish connections "across the seam," ie the one place the satellites in the adjacent plane are coming head on at orbital speed.
This makes side-linking easier because the relative velocity is comparatively low, but in general you still unavoidably need to switch side-link satellites (on one side) twice per orbit. Hence 49 minutes: this average must be calculated per connection, not per second, so the front/back links (plus random noise) count for less and only drag the average from 45 minutes up to 49 minutes.
> I believe Starlink (like Iridium) doesn't even try to establish connections "across the seam," ie the one place the satellites in the adjacent plane are coming head on at orbital speed.
The slide showing the multiple possible paths traffic can take seems to disagree with this statement?
Impossible to tell from the slide, because such a seam only occurs at one longitude, and moves over the day.
However looking at other sources, it seems Starlink (having more satellites) actually wraps the orbital planes 360° around the Earth (vs Iridium's minimalist 180° configuration), overlapping both North-moving and South-moving satellites in the same sky simultaneously. This means the Iridium seam disappears entirely. Neat! TIL.
Another problem that vanishes simply by being "hardware rich."
Partially! There are also ascending and descending satellites meeting. Ascending and descending here refer not to altitude but to direction in a "2D view" (ground-track) sense. See https://www.heavens-above.com/StarLink.aspx
Thanks, this is an important point. I missed the fact that Starlink's orbital planes actually cover the full 360° of RAAN[0], not just 180° like Iridium did (presumably to minimize the number of satellites).
So actually this Iridium-type "seam" disappears, meaning that every satellite should always have co-orbiting "neighbors" on both sides. Cool!
Most customers aren't served by lasers; their data goes up to the satellite and down to the nearest gateway. Lasers serve customers out of range of a downlink gateway, and the traffic probably travels the minimum number of hops needed to reach one.
But with lasers, it makes sense to route your packets via space. For example, traffic to a different continent would be faster (and cheaper) through space. Furthermore, I assume lasers have more capacity than gateways, so they could increase the capacity of one satellite by bundling it with more gateways.
Unfortunately, the routing to make this feasible doesn’t exist yet. Users need a single IP address from a range that’s homed at a single PoP. Starlink doesn’t support user-to-user connections through the mesh; you need to go all the way out to your PoP, then over to the other user's PoP, then back through Starlink to that user.
Are you talking about peer-to-peer connections between two Starlink users, like if they were both in the same satellite's range but separated by a really tall mountain between, etc.?
I thought that Starlink always "landed" to a base station back in the same jurisdiction? I think relaying through space could open a regulatory can of worms.
All countries have strict regulations on radio waves, whether that's sending or receiving. The UK for example requires a license for base stations that stipulates things like geographical boundaries, etc.
You can't freely blast radio waves into a country without falling subject to its varying regulations, but the regulations for "pre Starlink" satellite broadband/phones/etc are fairly well established.
Well, maybe it makes sense for US customers to send their traffic down from Starlink in Canada and then via fiber to the USA? I do not really see the problem if the traffic is encrypted and forwarded.
Men spend 3 hours a day watching TV, and women 2.5 hours. But TV time is lower (around 2 hrs/day) from ages 20-44, then increases again after 45 and peaks at 75 years old at nearly 5 hours a day.
Households without kids watch more TV, which surprised me.
That's a nice find. I think BLS leisure time data is from the American Time Use Survey [1] which I think is asking something similar to this questionnaire [2] on page 22.
I'm not sure that's measuring household time. For example, when they survey a household, it wasn't clear to me whether they survey everyone in the household or just one person. If it's one person, then it sounds like they collect how that one person (age 15+) spent their own time and whether there were kids in their household.
So then it'd be accurate to say that individuals in households without kids watch more TV as a singular activity (the survey doesn't allow simultaneous activities).
In comparison Nielsen used TV viewing diaries and automated data collection meters. You could have the TV on in the background while doing chores and it would still count.
It's interesting that the 2009 ATUS survey [3] had a 2.82 hour/person average because that's fairly different from the Nielsen data (4 hours 49 minutes/person).
I wonder if this difference is people underreporting in ATUS or Nielsen overreporting or a factor of differences in limitations in ATUS (no simultaneous activities allowed, 15+ age limitation) or Nielsen.
I grew up being accustomed to having the TV as background noise but stopped watching it when I moved out. Now, when I visit my parents, it's honestly quite difficult for me to focus on conversation - there's a machine in the corner making deliberately attention-grabbing sights and sounds. So I think your experience is normal & I empathise with the generation that complained about TV ruining family life.
Same and same and same, but I know exactly why I won't leave the telly on - I'm very susceptible. It grabs me. Even though I have no interest in ads or even 95% of programming. It's not a pleasant feeling.
I heard all those stories about Von Neumann working like that.
According to a biography, his wife once designated a room as his office and he became very angry about that since it was too quiet for him to work there.
Personally I need almost complete silence in order to get anything done, his abilities in this regard always fascinated me.
It's just a different kind of environmental requirement.
It's useful for some people to have recognisable sounds going on while they work, so they have something to latch their focus onto if they lose it for a second. Whether that be music, or every Seinfeld episode on a shuffled loop on the TV.
I have found it useful in the past to listen through every song I have on shuffle while I read, which was nice when I took a few-seconds break every couple of pages and came across a song I wouldn't have picked out otherwise. Alt-tabbing out of a podcast or something completely wrecks my focus on both for some reason though.
I actually prefer to work in a cafe setting, where there is a good amount of non-directed background noise; no one talking to me, but just to each other.
If it's dead quiet, I become hyper-alert to noises, to the point I can't concentrate on working.
If Millennial still = young, then yeah: YT or something in the background on the TV, doing something on the laptop (dev, photo editing, or other), and then occasionally the phone over the laptop as well to reply to chats and stuff.
I would kill for some decent high res wide fov AR glasses.
Happy British baking children! I don't know what it's called, but it is on Netflix and they are indeed happy, British, and they bake. Or just put on Asianometry if you need to focus a bit more. I must have been through his backlog a dozen times at this point. There is something about that man's voice that helps me relax and focus like nothing else.
For me it depends on what I'm doing. During working hours I have Soma FM playing at a low volume. Otherwise I'll probably have cooking videos or history documentaries playing as the background noise.
Yep, but that data originates from the provider's network and never leaves the provider's network, so they probably don't count it toward your usage the same way.
I don't think that breaks net neutrality either, which the FCC seems to be reimplementing
All my data usage is over LTE and NR. On one line it mostly gets used for streaming video (YouTube, Plex, Twitch) and averages around 500GB/mo. I rent a line to a friend and he's doing over 10TB/mo on mostly machine learning stuff and astronomy data.
T-Mobile absolutely counts all data used over the network, my voice lines go QCI 9 (they are normally QCI 6) when over 50GB of any kind of data usage each month, the home internet lines are always QCI 9. I don't have congestion in my area so it does not affect my speeds. This is QoS prioritization that happens at physical sector level on the tower(s).
They absolutely count it the same way. Comcast just gives me a number for bytes used, with a limit of (IIRC) 1.2TB above which they start metering. Our family of four dances around hitting that basically every month. The biggest consumer actually isn't video; it's my teenage gamer's propensity for huge game downloads (also giant mod packs that then break his setup and force a reinstall of the original content).
I think a few hundred GB for a typical cord-cut household is about right.
This obviously has no relevance for Starlink, which does not have local datacenters for CDN purposes. All that bandwidth goes through the satellites right before it reaches the user.
Definitely sounds like a no-brainer / reasonable next step.
Most ISPs have CDN appliances in their racks to save on uplink bandwidth. And from a satellite perspective, the uplink (in this scenario: the downlink from the satellite to the gateway) definitely is the expensive bottleneck.
You want to avoid congestion and every bit of caching could be helpful.
Then it comes down to the mass and power budget (and the reliability of flash drives in space) - but that doesn't seem too terrible.
Yeah 1TB seems average for anyone in IT who is really into data.
I'm kinda pissed there is no local ISP competition in my area... and I've tried reaching out to companies with little success... or they say they're expanding to my area soon but will not say when.
10 Gb symmetric fiber isn't hard. Hell, I'd use more bandwidth if I could, but I'm stuck with no fiber atm.
I’d have guessed they count “delivered bytes,” not “transmitted bytes,” and then you need to take into account each leg of the transfer. For Starlink that's at least two legs (for the simple bent-pipe situation) and potentially up to something like ?20? (for a “halfway around the globe, entirely Starlink” connection). The latter is probably statistically negligible, but even the factor of two would give ~2% utility. Taking into account that at least 2/3 of the orbit time is spent out of reach of anywhere useful, this would give something like 1 in 10 possible bytes being transmitted, which is much better than I’d have guessed if asked blindly.
Is resolution going to peak? Like speeding on a highway are there diminishing returns? On the other hand, bandwidth availability seems to also drive demand...
Resolution is always determined by angular resolution at viewing distance, even for analog TVs (they were smaller and further away). Also,
video on the Internet is always heavily compressed: the "resolution" is just the output size passed to the decoder and the inverse of the minimal pattern size recorded within, technically not related to data size. Raw video is h × v × bpp × frame rate and has always been on the order of a few to a dozen Gbps.
Just my bet: the bandwidth may peak or see a plateau, but resolution could continue to grow as needed, e.g. for digital signage video walls that wrap around buildings.
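The raw-bitrate arithmetic above can be sketched quickly; the bit depths and frame rates below are illustrative assumptions, not figures from the thread:

```python
# Rough raw (uncompressed) video bitrate: width * height * bits-per-pixel
# * frame rate. Chroma subsampling reduces effective bits per pixel
# (8-bit 4:2:0 -> 12 bpp, 10-bit 4:2:0 -> 15 bpp).
def raw_bitrate_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

# 1080p, 8-bit 4:2:0, 30 fps -> roughly 0.75 Gbps
print(raw_bitrate_gbps(1920, 1080, 12, 30))
# 4K, 10-bit 4:2:0, 60 fps -> roughly 7.5 Gbps
print(raw_bitrate_gbps(3840, 2160, 15, 60))
```

A ~10 Mbit/s 4K streaming bitrate against ~7.5 Gbps raw shows just how heavy that compression is.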
Sure, but "4k" is still being used as a differentiator by streaming companies in how much they charge. Even then they serve up some pretty compressed streams, where there's room to do less of that for a noticeable step up in quality.
There's of course a limit. The "native" bitrate equivalent of your retina isn't infinite.
Next step though is going to be lightfield displays (each "pixel" is actually a tiny display with a lens that produces "real" 3D images) and I assume that will be a thing, we shall see if it does better than the last generation of 3D TVs/movies/etc. That's a big bump in bitrate.
There's also bitrate for things like game/general computing screen streaming where you need lots of overhead to make the latency work, you can't buffer several seconds of that.
The next gen sci-fi of more integrated sensory experiences is certainly going to be a thing eventually too. Who knows how much information that will need.
When more bandwidth becomes available, new things become possible, sometimes that are hard to imagine before somebody gets bored and tries to figure it out.
When I'm futzing around with ML models, I'm loading tens of gigabytes from disk into memory. Eventually something like that, and things orders of magnitude larger, will probably be streamed over the network like nothing. PCIe 4.0 x16 is, what, 32 GB/s? Why not that over a network link for every device in the house in 10 years?
There is one key issue: keeping lasers aligned for long durations between satellites, and even between a satellite and a ground station. There are vibrations in satellites, and even a tiny bit of that vibration translates to beam misalignment. I am not an expert though. That could explain the bursts.
So it's hard to sustain the theoretical 100 Gbps connection for hours, let alone days, across two endpoints which are in constant motion.
> That means each laser is grossly underused on average, at 0.432% of its maximum capacity. Which makes sense since 100 Gbit/s is probably achievable in ideal situations (eg. 2 satellites very close to each other), so these laser links are used in bursts and the link stays established only for a few tens of seconds or minutes, until the satellites move away and no longer are within line of sight of each other.
I think I agree that each laser is grossly underused on average, but if you read the article, there's quotes about the uptime of these links. They're definitely not just "used in bursts [of] a few tens of seconds or minutes".
> That means each laser is grossly underused on average, at 0.432% of its maximum capacity.
Don't forget that every communication protocol has fixed and variable overhead.
The first is a function of the packet structure. It can be calculated by simply dividing the payload capacity of a packet by the total number of bits transmitted for that same packet.
Variable overhead is more complex. It has to do with transactions, negotiations, retries, etc.
For example, while the theoretical overhead of TCP/IP is in the order of 5%, actual overhead could be as high as 20% under certain circumstances. In other words, 20% of the bits transmitted are not data payload but rather the cost of doing business.
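The fixed-overhead calculation described above can be sketched for a common case: a full-size TCP segment inside a standard 1500-byte Ethernet MTU (20-byte IP header, 20-byte TCP header, 38 bytes of Ethernet framing including preamble, header, FCS, and inter-frame gap):

```python
# Fixed per-packet overhead: payload bytes / total bytes on the wire.
payload = 1460            # TCP payload in a 1500-byte MTU
ip_tcp_headers = 40       # 20 B IPv4 + 20 B TCP (no options)
ethernet_framing = 38     # preamble + header + FCS + inter-frame gap

total = payload + ip_tcp_headers + ethernet_framing
efficiency = payload / total
print(f"payload efficiency: {efficiency:.1%}")  # ~94.9%, i.e. ~5% fixed overhead
```

Small packets (ACKs, DNS queries) have far worse ratios, which is part of how the variable, real-world overhead climbs toward the 20% mentioned above.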
Starlink's big investor and launch customer was the US Air Force. The DoD had long complained about the lack of fast sat comms; it's also why they effectively own Iridium.
So in addition to households, add foreign bases and possibly drone command networks to the possible sources of traffic that needs to move fast enough to warrant a sat-to-sat connection.
My parents moved in and, being old, stream TV all day (instead of cable) and end up using about 40 GB per day with 1080p. We keep hitting our max of 1.2 TB set by our cable company (because there are others in the home!).
I should probably see if my router can bandwidth limit their mac addresses...
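That 40 GB/day is consistent with near-constant streaming; a rough check, assuming ~5 Mbit/s for a typical 1080p stream (a common rule of thumb, not a figure from the thread):

```python
# How many hours of 1080p streaming does 40 GB/day imply at ~5 Mbit/s?
gb_per_day = 40
stream_mbps = 5  # assumed typical 1080p bitrate

hours_per_day = gb_per_day * 1e9 * 8 / (stream_mbps * 1e6) / 3600
print(f"{hours_per_day:.0f} hours of streaming per day")  # ~18
```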
> And with 2.3M customers, that's an average 1.7 Mbit/s per customer, or 550 GB per customer per month, which is kinda high. The average American internet user probably consumes less than 100 GB/month.
The average household probably watches significantly more tv than HN users. That is almost all streamed - something like 6 hours per day times multiple TVs.
There’s probably redundancy in the links. In other words, A sends a MB to B, which sends it to C: that’s 1 MB of information transmitted to customers but 2 MB of laser transmission.
I'm seeing about 6Mbps per customer during peak hour on my own network, so 1.7Mbps over a longer period of time sounds like it's in the right ballpark.
Thermal management is also a tremendous problem in space. All power generated must be radiated away, and satellites effectively sit inside a vacuum insulator.
I'd be interested in what the sustained power/thermal budget of the satellites is.
Where did you get that 100GB/mo number from? 4K streaming eats up data transfer quickly. Comcrap & friends knew what they were doing making arbitrary data caps that sounded like a big number at the time. Wireline data caps should be illegal.