Since the linked article is somewhat ambiguous about this, and other commenters appear to be getting confused about the purpose and value of the laser links as well: these laser links are purely intended for satellite-to-satellite communications within Starlink. They are not (at least at this time, and likely not for the foreseeable future) intended for ground-to-satellite communications.
The value that sat-to-sat laser links provide is that they create a low latency, high bandwidth path that stays within the Starlink satellite network. Before these 10 satellites, each Starlink satellite had only been capable of communicating directly with ground terminals (consumer, transit, or SpaceX control). For traffic that needs to move large geographic distances (think transcontinental), this can require several hops back and forth between ground and space, or else the traffic exits at a node geared for transit and most of the data travels along existing ground Internet links.
By performing this type of transit directly in space, and exiting at the transit node nearest the data's destination, you greatly reduce latency. Bandwidth still might not be great, but this unlocks a very financially lucrative consumer use case: low latency finance traffic and critical communications. There are many use cases around the world where shaving even 10-20 milliseconds off a data path can unlock finance and emergency capabilities, and this has been a long-fought battle throughout the history of those industries. As an example, if you got a piece of news about a company in Australia and wanted to trade on it as quickly as possible in the US, beating your competitors by 10-20 milliseconds can mean a lot of money.
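To put rough numbers on that advantage, here's a back-of-the-envelope comparison of long-haul fiber against a free-space laser path riding the 550 km shell. The great-circle distance, fiber padding factor, and refractive slowdown are all illustrative assumptions, not measured figures:

```python
C = 299_792.458          # speed of light in vacuum, km/s
FIBER_FACTOR = 0.68      # light in glass travels at roughly 2/3 c

def one_way_ms(path_km, speed_factor=1.0):
    """One-way propagation delay in milliseconds."""
    return path_km / (C * speed_factor) * 1000

# Assumed figures: ~16,000 km great-circle Sydney-New York; real cables
# are longer and indirect, so pad the fiber path by ~20%.
great_circle_km = 16_000
fiber_km = great_circle_km * 1.2

fiber_ms = one_way_ms(fiber_km, FIBER_FACTOR)
# Laser path rides a 550 km shell: a slightly longer arc plus up/down legs.
shell_km = great_circle_km * (6371 + 550) / 6371 + 2 * 550
laser_ms = one_way_ms(shell_km)

print(f"fiber ~{fiber_ms:.0f} ms, laser ~{laser_ms:.0f} ms, "
      f"saving ~{fiber_ms - laser_ms:.0f} ms one-way")
```

Even with generous rounding, the vacuum-speed path wins by tens of milliseconds over intercontinental distances, which is exactly the margin HFT cares about.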
Laser comms for Starlink sats have long been planned, but have historically proven to be quite hard to get working. They also depend on a sufficient critical mass of satellites so that a given sat actually does have another sat within lock to send the traffic towards.
Then there's the telemetry and tracking for the laser links themselves. All the satellites need fairly accurate orbital information about all other satellites; there will be several thousand satellites capable of performing autonomous orbital adjustments to avoid debris and for their own station keeping. There also has to be a convenient and reliable fallback for automatically syncing satellites with the rest of the network after a reboot or loss of communication.
The packet network needs to maintain an efficient routing table and provide low packet loss to end-users. My guess is that the only practical choice will be establishing a large number of point-to-point channels with their own retry/error-correction management. TCP/IP isn't capable of dealing with more than about 1% packet loss, and I doubt raw optical/wireless links will be reliable enough, especially with topology changes, without extensive store-and-forward hardware to buffer transmissions waiting for a reliable route or to de-duplicate multicast packets at the receiving end. Reassembly and de-duplication on receipt are theoretically feasible; if a tight reception window can be maintained, then specialized hardware could filter duplicates sent over 2 or 3 redundant routes to a local receiver without increasing observed latency or using unreasonable amounts of RAM. A combination approach could trade off using double or triple the raw bandwidth for low-latency reliability vs. higher latency with retries to maximize throughput, depending on the current network load. It could also be hardcoded per traffic class.
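The de-duplication idea can be sketched in a few lines. This is purely illustrative (the class name, window size, and sequence-number scheme are all made up); it just shows that a bounded window of seen sequence numbers filters redundant-route duplicates with fixed memory:

```python
from collections import deque

class DedupWindow:
    """Filter duplicates of packets sent over 2-3 redundant routes.

    Keeps a bounded window of recently seen sequence numbers, so RAM
    use stays fixed no matter how long the stream runs. The window size
    is an illustrative guess, not a Starlink parameter.
    """
    def __init__(self, window=4096):
        self.window = window
        self.seen = set()
        self.order = deque()

    def accept(self, seq):
        """Return True the first time a sequence number arrives."""
        if seq in self.seen:
            return False            # duplicate from a redundant route
        self.seen.add(seq)
        self.order.append(seq)
        if len(self.order) > self.window:
            self.seen.discard(self.order.popleft())
        return True

# Two redundant routes deliver the same packets in slightly different order.
rx = DedupWindow()
route_a = [1, 2, 3, 4]
route_b = [2, 1, 4, 3, 5]
delivered = [s for s in route_a + route_b if rx.accept(s)]
print(delivered)  # [1, 2, 3, 4, 5]
```

The tight reception window mentioned above is what lets the set stay small: duplicates that arrive outside the window would be dropped by other means anyway.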
I'm not involved in this problem space in any technical capacity but it sounds like a very fun set of problems to solve.
SpaceX continues to genuinely feel like a practical intellectual's playground. They really do have so many fascinating problems to work on.
Accurate orbital information might not be necessary (or even possible) if you can perform broad scanning that quickly locks in your given target, but it certainly doesn't hurt. The issue is that even with an orbital track, because the sats are so low in LEO, atmospheric drag will change an orbit rapidly enough that this information can get out of date quite fast.
Convenient and reliable fallbacks I feel are largely a solved problem for SpaceX. They've built their Starlink bus by reusing a lot of the software from the Falcon 9, Dragon, and Starship programs that already had to handle even greater levels of reliability.
Routing and retransmits are indeed quite a novel area for this kind of service. But the laser link will only be locked on one other peer satellite at any given time, and I don't think they plan on reorienting the sats in order to establish laser comms because of the effect it would have on drag as well as albedo. The former impacts service life per sat, and the latter has been a big rallying cry for Starlink opposition due to the impact it has on astronomy and the visible sky.
So if you likely can't reorient to retransmit, your only options are the peer laser link or the ground transit exit node, either or both of which might not exist, but I feel like you would just ack the packets on each hop to your peer and leave it at that.
SpaceX initially planned to have 5 laser links per satellite but settled on 4. That's 4 separate lasers maintaining contact with 4 neighboring satellites.
The two in front and back are nearly stationary relative to each other. The side links are constantly moving, though.
It's not clear to me that laser links would require reorienting a substantial portion of the satellite's mass. From a theory standpoint the only thing that would need to move is the lens, and even that may not be necessary[1]. While I'm not familiar with the intricacies of point-to-point satellite communication, I'd venture that it's within physical possibility to maintain fiber-like levels of bandwidth over large distances in LEO.
While a major leap, Starlink could potentially service internet backbone traffic at substantially lower latencies in such a world.
> A combination approach could allow a tradeoff between using double or triple the raw bandwidth
You can control the overhead while increasing recovery rate over multiple paths fairly well, with non-integer overhead (>1x but <2x, etc) with erasure coding techniques. Some erasure codes can allow you to control that overhead (code rate) precisely, but have legal imbroglios: https://en.m.wikipedia.org/wiki/Fountain_code
I am curious about the overhead on a packet network for erasure codes. I am guessing they're easier to use on a channel oriented network where a small number of senders and receivers are protecting a multiplexed packet stream. Additionally I'm guessing the most effective use would be 2D encodings like optical media use to spread the codes out over time (for runs of bit errors) and space (single bit errors) which would introduce another latency/efficiency tradeoff.
If you have bursty bit errors on a link, erasure codes can correct them without a round trip while not adding latency in the case where a burst of errors didn't occur.
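To make that concrete, here is the simplest possible erasure code: a single XOR parity packet over a group, which repairs any one loss in the group without a round trip. Real systems use Reed-Solomon or fountain codes rather than this toy scheme:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet over equal-length data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(packets, lost_index):
    """Rebuild one missing packet by XOR-ing every surviving slot.
    `packets` is the full group (data + parity); the slot at
    `lost_index` is ignored, as if it never arrived."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    out = survivors[0]
    for p in survivors[1:]:
        out = xor_bytes(out, p)
    return out

data = [b"pkt0", b"pkt1", b"pkt2"]
coded = add_parity(data)   # 4 packets on the wire: 1.33x overhead
# Say the second packet is lost to a burst error; no retransmit needed:
print(recover(coded, 1))   # b'pkt1'
```

This is the "non-integer overhead" knob: protecting k packets with m parity packets gives a code rate of k/(k+m), tunable per link quality or traffic class.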
Having said that, I don't think this situation will apply to SpaceX. They probably won't get random bursts of errors. Noise levels will be very deterministic and can be calculated ahead of time by looking at which stars are "behind" the satellite.
Exactly, there was a great animation posted here once that showed the hops with and without lasers. Given that there are examples of 44+ terabits[1] being transmitted optically, I am not too worried about the bandwidth.
I'd love to be on the Starlink team, they are building some really cutting edge stuff. There are not many places or times where you can have such a big impact on the world and their team happens to be one of them. Good times.
Indeed, what a fantastic skill both in animation but also in communicating a complex concept. I missed this the first time around, so much appreciated.
This seems overly ambitious, and not likely to translate to laser satellite comms links. Remember these sats are in low LEO, so there is at least some amount of non-uniform atmospheric medium to create optical distortion that you wouldn't have in a tightly controlled environment from your link (chip, datacenter, what have you).
Then there's the issue of signal attenuation at distances like these, along with the power budget from the solar panels to ensure that you aren't burning all your power just on laser comms.
I can't imagine these links would be anything more than 1:1 at any given time, at least not at first. Maybe later they might be able to handle simultaneous laser links from 1:3 or something like it, but I highly doubt that's their current capability.
Even some of the best latest ground-to-sat laser comms links are on the order of ~7 gigabits per second. [1] I imagine those are far larger sats than Starlink, and their entire power and thermal budget is likely spent on comms. And the ground stations can be orders of magnitude larger and more power hungry than a Starlink sat. Note also that this is from GEO, so they likely don't have to handle substantial relative movement either.
The link between neighboring satellites in a 22-satellite-per-plane constellation at 550km altitude comes at its closest 480km above the ground. The link between next-farthest-neighbors comes at its closest 270km from the ground. The link to the second-farthest-neighbor is not in line of sight. Optical effects at 270km or 480km altitude are negligible; 99% of the atmosphere's mass is below 30km altitude. In theory you could go all the way down to an 8-satellite plane at 550km and still maintain line of sight and freedom from most of the atmosphere's interference.
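The geometry behind those figures is easy to check: the straight-line link between two satellites in the same circular orbit dips closest to Earth at the chord's midpoint. A small sketch reproducing the numbers above:

```python
import math

R_EARTH = 6371.0   # km
ALT = 550.0        # Starlink shell altitude, km
SATS_PER_PLANE = 22

def link_min_altitude(neighbor_offset):
    """Lowest altitude of the straight-line link between two satellites
    in the same circular orbit, `neighbor_offset` slots apart."""
    r = R_EARTH + ALT
    half_angle = math.pi * neighbor_offset / SATS_PER_PLANE
    return r * math.cos(half_angle) - R_EARTH

print(f"neighbor link dips to      {link_min_altitude(1):.0f} km")  # ~480
print(f"next-neighbor link dips to {link_min_altitude(2):.0f} km")  # ~270
print(f"second-farthest neighbor:  {link_min_altitude(3):.0f} km")  # negative => blocked by Earth
```

A negative result means the chord passes through the Earth, i.e. no line of sight, matching the claim about second-farthest neighbors.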
Satellite-to-ground optical laser links are another matter, but you can always route a few hops around weather systems.
Indeed, why not! We have an absurd number of software job openings now, both on the Starlink network side and the flight software / simulation side. Bonus points if you have a top secret clearance.
I imagine ChuckMcM's reasons may be similar to my own: when last I checked, SpaceX didn't have any positions open for Bay Area residents unwilling to relocate.
But I do see on the careers page there's now a (single) Palo Alto based Software Engineer position open.
Well there are a couple of reasons one might not, but the biggest one is that given people with a wide range of experience in both individual contributor and management roles it is often difficult to figure out how to match the skillset with the role. Send me an email (profile has it) and I'll send you a resume you can dump into the ATS system to see if anyone thinks I'd be a good fit. (And no, I don't send my resume to head hunters to shop around).
The second challenge is that I am in a position to be reasonably frank when the biggest threat one can hold over me is to be fired. I have found that some managers don't appreciate employees over whom their "hire/fire power" doesn't give them any leverage. This, combined with a general perception that I am a good manager, means that I tend to develop a followership whether or not I'm in a leadership role. This can lead to "issues" no matter how hard I try to prevent them.
It is possible to get ITAR authorization without being a US citizen or permanent resident. However, it is a long and arduous process, and even then those employees are restricted from some information. For that reason, it probably won't be considered unless you are very highly qualified.
If I were you, I'd read up on ITAR qualification and, if you think you would be eligible and are highly qualified, just apply.
I think the other important use case is that sats without a ground station in view can backhaul traffic through other sats (for cruise ships or remote islands).
even just the ability to hop 1 satellite can greatly extend the range of one spacex earth station. right now, as the parent poster mentions, the moving LEO satellites need to be simultaneously in view of the CPE antenna and a spacex earth station.
and also relatively overhead of the CPE antenna, since the starlink customer terminal is a phased array that does dual beamforming in a 'cone' of view directly above it. just because a satellite is visible 5, 10 or 15 degrees above the horizon from the POV of the CPE doesn't mean it can talk to it. the system is definitely reliant upon a fairly high satellite density.
with the ability of the satellite that's generally overhead of a starlink earth station to talk to the satellites immediately behind, and preceding it in its orbital plane, and then those two additional satellites to talk to the CPEs underneath them, the possible coverage area can be greatly increased.
3D visualization of starlink orbits and coverage footprints:
I had a friend who worked at the South Pole for a year. They had a few hours a day when they could access their main communications satellite. Something like this could be a game changer for remote research stations.
I think Starlink sat-to-sat links will also help bandwidth: specifically, when a given downlink connection is heavily utilized, the system can shunt traffic to another, slightly less ideal but less utilized ground station.
> unlocks a very financially lucrative consumer use case
That's a bingo! I think there's a pretty good chance that this could turn into a pretty epic cash cow for Starlink.
I do wonder how much of a draw Starlink will be versus terrestrial microwave for things like HFT firms. Guess it will do really well for linking further-afield exchanges like Hong Kong and NYSE, where a straight-line microwave route isn't practical, so the extra distance added by the orbit doesn't matter as much.
a lot of serious HFT for transcontinental stuff moved to HF radio some years back, with big-ass yagi-uda antennas aimed between locations like the CME datacenter, london, tokyo, new york.
it's very low data rate but also guaranteed lower latency than the submarine cables.
OK, so even that will be hard to break into except as a wider-bandwidth option, which would have some applications in HFT, but it'd be tough since latency is king there.
Maybe with that out of the picture, other uses like telepresence for remote surgery can edge in. I've been waiting a while for that to really take off; it seems like an awesome way to provide services you couldn't otherwise afford to bring to remote places.
I have read that Starlink, given sat to sat relays, should be able to beat any possible ground based system as long as the distances are great enough. Does that match your understanding of this?
My understanding is it's tough to beat point to point microwave arrays for directness in a lot of cases, they're already pretty close to the straightline path between some locations. The same is basically true for the longer range low bandwidth links that curve around the Earth.
Starlink is at a disadvantage because fundamentally it's pathing across a sphere with a larger radius so even when the path is direct across the Starlink cluster it still has to traverse a longer distance than the equivalent path on the ground. Adding to that the path won't always be directly across the surface of the constellation, depending on where you are and where the destination is you'll find some pockets where the link has to zig zag across the constellation slightly adding to the additional link distance. This [0] isn't necessarily a perfect representation but it does show how starlink's linking may work and how it's not a perfect net. It also doesn't account for the acquisition times for new laser links which were on the order of 20 seconds in another laser sat link test.
I do terrestrial point-to-point microwave/millimeter wave stuff professionally, so it depends what you're trying to search for. Microwave isn't really used for any significant portion of traffic at "long distances" anymore, like it was in the days before singlemode fiber. But it is very useful in places where you want to reach a very rural area or go somewhere that would be cost prohibitive to build to with fiber.
There are atmospheric and practical effects that make PTP microwave links longer than 50-60 km, and data rates above 700-800 Mbps, increasingly costly as distance increases.
It is totally possible technologically to build very long chains of microwave PTP radio hops, as some high frequency trading companies did between New York and Chicago, but the aggregate throughput and capacity is minuscule compared to fiber.
Would be happy to answer any more specific questions.
Here's an example of a current state of the art point to point microwave radio, for use in the common FCC licensed part 101 bands (6, 11, 18, 23 GHz, etc)
a lot of theory and conjecture about sat-to-sat laser relays has been published by people (and youtube videos made, etc), but many of these theoretical topologies are dependent on plane-to-plane communications which will be difficult to aim and maintain links on.
if you google 'starlink train' and look at some videos, when a batch of 60 starlink satellites is launched, they're all at the same orbital inclination. they remain at the same inclination, but as the weeks pass after their launch, they are spread out to follow each other at several-hundred-km spacings. but they're still following each other in what is basically a strung-out conga line of satellites. communications between forward/rear satellites in the same batch should not be nearly as difficult to aim.
what would be difficult to aim and maintain links on would be cross-plane links between two different sets of satellites, with very high differing relative velocities.
This video goes a bit into the probable topology of the links and gives a good look at how the eventual constellation would look. Eventually, in the full constellation, there will be 4 easy links to the birds in front, behind, left and right; those 4 will stay in largely the same relative position until they reach the highest latitude of their orbit. Those alone will make an OK link with a bit of a zig zag. The tough ones are those ~180 degrees out of phase; those will be crossing roughly perpendicular and at a really high relative speed.
It's online already (still labelled "beta" but works with interruptions of only a few minutes per day) in certain latitudes (roughly 41.8° N to 52.1° N, as far south as Rome and as far north as The Hague); check the starlink subreddit for reports.
I did not find anything suggesting it'd be used productively for the satellite if it manages to deliver.
Do you have anything about that you can tell/link?
I'd be curious what the initial design weight was, before the thermal expansion problem that caused the aluminium block.
As for the data rate, it seems a mechanically simple fold-out mirror (released during deploy, say with a current pulse into a shape-memory-alloy torsion spring to delay the unfolding until after ejection, or just a simple friction brake and a normal torsion spring) could significantly decrease the beam width when made from e.g. Zerodur and fabricated as an offset parabolic dish. Slightly modifying the refractive collimating optics of the current design should make that approach possible.
I'd expect vibration in space to be a negligible issue, so an edge-mounted mirror should be stiff enough.
Not just low latency. Without satellite-to-satellite communication, both ground stations have to be able to see the same satellite.
In GEO that's not a problem - a lot of the planet is in sight. You have a ground station in New York and you can bounce off a satellite over the Atlantic and land the signal in Nigeria just fine.
With Starlink the orbits are really low, so the distance to the ground station is short. That's fine if you are in the backwoods in Washington and bounce to a receiving station 100 miles away in Seattle; it's no good if you're at sea, or (in this case) at an Antarctic station -- one which can't even see GEO satellites.
big chunks of antarctica where most of the research base population is located can see geostationary, just at a very low look angle. the whole peninsula, mcmurdo and places at similar latitudes to mcmurdo.
it's the pole that has problems seeing geostationary, and has up until very recently been totally dependent on 7-8 hours of coverage a day using big tracking antennas to talk to old geostationary satellites that have wandered somewhat out of their original orbits, so that from the POV of the earth station at pole, they can be communicated with part of the time.
Not sure if they'll get a Starlink terminal down there this season, especially given COVID-19 restrictions in New Zealand.
Most data and internet from Pole is via ~4 hours of DSCS a day, ~4 hours of Skynet (slowww...), or when TDRSS decides to give South Pole time (but in small chunks of time).
I usually try to ssh into my experiment at Pole at the start of DSCS time, which unfortunately can be in the middle of the night for me.
The US DoD and the Soviets (later Russians) somewhat solved this problem by putting Molniya/Tundra-orbit communications satellites into orbit for narrowband data, voice, and text comms. It has been a thing for 35+ years.
But there has been nowhere near enough money available to do something like set up a pair of Molniya-orbit satellites with long apogee dwell times over the center of Antarctica.
Sure, but that pittance of bandwidth is a far cry from having persistent 24/7 access to a high bandwidth, low latency service like Starlink. Long boring months spent in the darkness with very few recreational outlets now all of a sudden become a lot more attractive with access to the open Internet and your loved ones with no restrictions.
I can only imagine. For the Amundsen-Scott station people I imagine it will be revolutionary.
Assuming that the topology will be additional highly-inclined/polar orbit Starlink satellites, communicating with each other in a laser chain, this will be dependent on Starlink setting up earth stations near fiber somewhere in southern Chile, Argentina, or South Africa. Or southern Australia, Tasmania, or New Zealand.
That's true, though real world performance is often more like 350kbps. The Iridium NeXT series satellites and second generation network are a fairly recent thing to go into production use in the past couple of years, before that, an individual iridium terminal was limited to about 2400 bps with v42bis compression. There were some famously weird multilink PPP bonding solutions used in Antarctica with four separate Iridium modems and antennas.
Aside from the fact that Starlink connections are one or two orders of magnitude better in throughput and latency, the biggest problem with Iridium is the cost. It's simply out of the realm of affordability for the likes of streaming some videos or playing games in their off-time.
Now that I think about it, though, the Starlink user terminals likely won't be able to survive the harsh winters at the poles. -30 deg C and low tolerance for high winds mean this would be tough unless some type of non-obstructive sheltering could be provided.
I imagine that compared to the fully-loaded cost per hour of Amundsen-Scott station just to exist, be periodically resupplied, pay salaries of staff, buy equipment, transport logistics...
Even if you maintained multiple $1.25 per minute (or $5 per megabyte) Iridium links, operating 24x7x365, it would be a teensy tiny drop in the bucket compared to the total budget of the station.
The field of view from the satellite is 5000 km, though reception probably gets a lot worse at lower angles for a multitude of reasons. Hope I calculated right.
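For reference, that footprint figure follows from simple horizon geometry at 550 km altitude; a quick check of the number:

```python
import math

R = 6371.0     # Earth radius, km
ALT = 550.0    # satellite altitude, km

# Horizon geometry: the satellite sees the Earth out to the tangent point.
center_angle = math.acos(R / (R + ALT))          # radians, at Earth's center
footprint_diameter = 2 * R * center_angle        # distance along the ground
slant_range = math.sqrt((R + ALT)**2 - R**2)     # satellite to horizon point

print(f"footprint ~{footprint_diameter:.0f} km across, "
      f"slant range to horizon ~{slant_range:.0f} km")
```

That gives a footprint of roughly 5100 km across, so the ~5000 km figure checks out; as noted, the usable area is much smaller because links near the horizon don't work.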
the starlink beta test customer terminals only have a cone-shaped view of suitable beamforming ability and gain above them, so the ~5000 km footprint is a lot less in usable practice. they can't talk to a satellite that's 15-20 degrees above the horizon, for instance; it'll only work when the satellite rises higher in their general field of view.
The horizon limitation is primarily regulatory.
Given that limitation, it doesn't make sense for them to extend the cone closer to the horizon than their flat phased array can do.
Indeed, a flat phased array can only achieve limited gain at elevations and azimuths far from the direction it's pointed. That's one reason why, if you ever see a flat phased-array radar inside the nose radome of an air-superiority fighter jet, it's often not fixed but mounted on a sort of pan/tilt platform for steering.
The real value is not super low latency communication, but rather airplanes and ships out of reach of normal base stations.
I have seen no evidence that traders will be the primary users. Do you have any evidence or cases where they are doing or planning this? Or are you just guessing?
Every single high-speed trader's ears perked up instantly when news of this came out. They already spend huge amounts on microwave links overland between e.g. Chicago and New York, for just a few milliseconds advantage over fiber.
Starlink gives them the same opportunity across Atlantic, Pacific, and Indian oceans. You can bet that there will be orbits that exactly link London and New York, New York and Tokyo, New York and Singapore, London and Singapore, etc. Traffic not sent under extra-high tariffs will be artificially delayed enough milliseconds to match fiber.
It would not be surprising if these low-latency contracts, for traffic amounting to well under 0.1% of total capacity, provide 10% of revenue.
It would also not be surprising if future satellites have tens of TB of storage onboard to proxy streaming for the highest-demand shows of the moment, and broadcast capability to multiple ground stations simultaneously for live streams, particularly soccer matches.
Most likely, aside from financial transaction data, routing will always offload packets to ground terminals as early as possible. I strongly doubt zigzag routing will happen at all.
In my space especially, with video conferencing, people are willing to pay a premium for lower latency on audio/video conferencing links. So there may be a market there (if you can backhaul phone, video, and audio conference traffic) where even 10-20 ms in savings may be noticeable.
Another may be gaming.
Another is as you say planes and ships out of reach of ground stations (very sparse situation however).
But I too was interested to see if they had an actual idea in mind?
I would think the computer game market. Maybe other markets with high-end, mature users where the client side is very good, so lower latency might be something that matters relatively more.
if your potential adversary is a five eyes nation state intelligence agency with the ability to sniff traffic from 10/100/200GbE submarine fiber DWDM links, the fact that spacex/starlink is also a US company and subject to national security letters and other monitoring won't help you.
At some point, the communication has to enter the terrestrial network. That entry node is likely to be under the influence, if not outright control, of Five Eyes security infrastructure.
You would need to build out a lot of infrastructure to escape the sort of ubiquitous surveillance that is available to those organizations.
> At some point, the communication has to enter the terrestrial network.
Not necessarily. If you're communicating between two SpaceX terminals, that could feasibly avoid SpaceX ground stations. I don't know enough about how capable SpaceX's routing actually is to say whether this is within their scope, though.
and then we come back to one of the original missions of the NSA, which was to intercept international telephone call/data/telex traffic that had no terrestrial intermediary, between two different C or Ku band geostationary earth stations in two different locations of the world which were using a geostationary transponder as a relay point.
If it's flying through the air and somebody really wants to grab it they will, whether they can read it depends on how good your crypto is.
Can’t anyone (well, anyone with state kind of money) observe the laser and at least dump it with perfect accuracy to keep for later decryption? I’d think tapping an undersea cable is more work than observing light going from sat to sat? I’m just spitballing here; I have no idea what’s possible or not, but I’d be curious to know.
> Can’t anyone (well, anyone with state kind of money) observe the laser
LEO is far from a true vacuum. But it's sparse enough that you have limited beam scattering. Getting a read on a sat-to-sat laser emission from the ground is nontrivial.
What about scatter from the satellite itself? The laser can't be perfectly absorbed on the receiving end. I haven't done the calculations, but I fully expect the spot size is much larger than the receiver.
That said, I imagine that for any serious bandwidth, you won't get enough SNR to decode anything from the ground.
Remember that the only reason a laser works for this kind of link is that it is very highly collimated. The light scattering from the receiving satellite is diffuse so the intensity will go down very quickly with distance. It might be observable with a suitably large telescope, but I doubt it would be practical.
So you have a reflection off random bits of equipment followed by a trip through most of Earth's atmosphere? That doesn't sound easy at all -- and they could choose wavelengths that won't make the trip well, if at all.
I guess if you got an observation platform up closer that would help.
More like farther away. Put a spy satellite in a high orbit like traditional satellite internet and it'll be able to see half of Starlink simultaneously with essentially no distortion.
If they were strongly encrypted there may be no reason for a nation state to observe the intersatellite links, but I wouldn't expect this at least for the near future since Elon's focus will probably be to cut corners to get the absolute best performance possible.
There are multiple layers of encoding to mitigate this. CCSDS [0] is very widely used for satellite communications, and a Google image search for CCSDS protocols [1] will give you a quick view of the protocol stack (which can vary a lot as different CCSDS protocols are used in different applications).
I think it would require someone to fly a spy sat up near one of the SpaceX sats. From the ground the atmosphere would attenuate the signal too badly and you'd only have visibility for a few minutes at a time as it passes overhead maybe.
In practical terms I don't think it's a major concern. Much more likely that the spy agencies would tap the lines coming out of the ground stations.
> Much more likely that the spy agencies would tap the lines coming out of the ground stations
Which is a significant step backwards for e.g. Russia.
Currently, all data between Asia and North America runs on cables. Anybody can spy on those. If those data did satellite laser hops, only those with ground stations would have access. Everyone else gets locked out.
The company I work for changed its internal routing of data to use encryption for 100% of traffic crossing the public internet. I have to think everyone else is doing this as well. Is spying on cables like this even useful? How would ay usable data be extracted?
Military inter-satellite links deliberately choose laser wavelengths that do not propagate well through Earth's atmosphere. Such links could only be intercepted from a space-borne platform. I have no idea what laser wavelength StarLink is using, so it may not apply here.
I have no doubt that (like every other geo-mobile satellite constellation) the system allows for "port mirroring" on point-to-point links, or (more likely) relays everything through a ground station even when doing so has performance implications. This allows them to fulfill their obligation to government(s) surveillance.
Also, each starlink satellite can use beamforming to very rapidly send narrow beams down to Earth and switch quickly, hundreds of times faster than steering an antenna mechanically.
They can't (as far as I know) use beamforming with lasers, so they can't be steered to different terminals as quickly.
Not quite. In order for a laser sat to talk to another sat, the second sat needs to also have laser comms hardware on it. Given that these were the first 10 to boast of such hardware, initially they'll only be talking to each other. Over time, as Starlink pitches more sats into orbit with laser comms, they will have more and more peers to talk to.
As the sister comment implies though, Starlink sats are in low Low-Earth-Orbit (LEO) so they experience non-zero drag from the atmosphere. This slows them down, causing them to gradually fall back to Earth, creating a natural expected service lifetime for each satellite in orbit.
So the expectation is that SpaceX will have to continually put up more and more replacements indefinitely as older sats decay and burn up in the atmosphere. These 10 laser sats are just the latest among a string of sats that are yet to come online with newer and better capabilities.
I question how low latency these links will be, given the sheer number of hops (and thus queues and transceiver modulations) required to span the globe. And, of course, every round trip requires four trips through the atmosphere. Then there's the Manhattan distance of the hops. Maybe dedicated QoS flows along orbital planes stand a chance, but not general Internet traffic.
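As a rough sanity check on the latency question, here is a back-of-envelope comparison of a long undersea fiber path against a vacuum laser path. All the route factors below are assumptions for illustration, not Starlink figures:

```python
# Back-of-envelope one-way latency, London to Singapore (~10,900 km great
# circle). Route factors below are assumptions for illustration only.
C_KM_S = 299_792.458            # speed of light in vacuum, km/s

great_circle_km = 10_900
fiber_route_km = great_circle_km * 1.3      # assumed undersea-cable detour factor
fiber_speed_km_s = C_KM_S / 1.47            # light in silica fiber (n ~ 1.47)

sat_altitude_km = 550                        # Starlink shell altitude
# assumed path: 20% zigzag between satellites, plus up and down legs
sat_route_km = great_circle_km * 1.2 + 2 * sat_altitude_km

fiber_ms = fiber_route_km / fiber_speed_km_s * 1000
sat_ms = sat_route_km / C_KM_S * 1000
print(f"fiber: ~{fiber_ms:.0f} ms one way, laser path: ~{sat_ms:.0f} ms one way")
```

Under these assumptions the vacuum path comes out roughly 20 ms ahead one way even though it's physically longer, which is the finance use case discussed upthread; per-hop queuing and modulation delay would eat into that margin.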
There are two different explanations that I've read over the last few months for why we didn't see laser links on previous rounds of Starlink satellites.
First, that the unit pricing was too high. SpaceX is targeting an extremely (absurdly) low price of $250,000 BOM per satellite, and it was theorized that the laser links were blowing the budget. It was estimated that it would cost ~$100k for the lasers, targeting, and electrical systems to add the links.
Second, I've read that they had issues guaranteeing that all the components of the laser system would fully burn up in the atmosphere. One of the conditions of launching low-orbit satellites is that they will fully burn up on re-entry (therefore not posing any risk when they fall back down to earth). Apparently some of the optics components had a chance of surviving reentry and ultimately impacting land.
I'm not sure if we can say that this launch indicates that the cost issue has been solved. It could be worth blowing out the BOM to get some operational experience having a few birds with the lasers, so perhaps they haven't fully solved the pricing issue unless we start seeing lasers on every subsequent launch.
TFA has a great animation of a polar orbit. It's basically a longitudinal orbit, so it will pass over land a lot of the time, which means they must have at least solved the burn-up issue, if in fact that ever actually was an issue in the first place.
Also, FYI, there was a scheduled attempt to launch SN9 today, but it just got scrubbed a minute ago due to winds. They will be trying again tomorrow! reddit.com/r/spacex is a good place to watch for updates: https://www.reddit.com/r/spacex/comments/krllbt/starship_sn9...
There are multiple reasons, from demisability to BOM cost, and also just that the tech wasn't quite ready at first and they wanted to launch ASAP to get that minimum viable product up and running (besides, their FCC licenses have a time limit for deployment... if they don't deploy their network fast enough, they can lose access to the spectrum they want to use). They also wanted to be able to bid for broadband subsidies, etc.
Lots of good reasons they just launched before the laser links were ready. Also, they don’t necessarily need them except for really, REALLY remote customers (ie a minority), so they can sprinkle in lasers into their constellation as they become available.
I've seen the "burn up in the atmosphere" constraint before, and it struck me as a bit crazy for Shoot First and Ask Questions Later SpaceX to care about this.
> I've seen the "burn up in the atmosphere" constraint before, and it struck me as a bit crazy for Shoot First and Ask Questions Later SpaceX to care about this
SpaceX has never been even a little careless about following laws, regulations and staying within the lines of their FCC/FAA authorizations. There is this reputation surrounding them, probably because of their visible and often explosive development practices, and because Tesla absolutely does play fast and loose with regulations, but for SpaceX it is not just entirely baseless, it's sort of the exact opposite of the truth.
When FCC tells SpaceX to jump, SpaceX asks how high on the way up. Demisability standards were among the constraints of their license, so SpaceX made sure to not just meet the requirements, but well exceed them, which is what they always do.
Why Musk likes to flout all rules at Tesla and yet follows rules strictly at SpaceX probably has a lot to do with how the regulators are generally pretty toothless about doing anything to Tesla, while either the FCC or the FAA could pretty much shut SpaceX down for as long as they like should SpaceX try to play cute with them. Enforcers with actual ability to enforce rules tend to be respected more.
SpaceX absolutely does not follow fcc guidelines. There have been numerous occasions, including changing all their satellites to 550km altitude without getting proper approval where they do it and ask for forgiveness later.
> SpaceX absolutely does not follow fcc guidelines. There have been numerous occasions, including changing all their satellites to 550km altitude without getting proper approval where they do it and ask for forgiveness later.
No. You have been misinformed, they have always asked for the authorization first. For the orbit modification to 550km, they asked on 2018-11-08, were granted permission on 2019-04-26, and started launching a month after that. [0][1]
No, the point is you aren't allowed to do that. The initial filing for their satellites in the frequencies they were requesting was for certain altitudes. Changing that would require a brand new filing (not an amendment), and they didn't do that.
This is an argument that has been forwarded by Starlink competitors. FCC vehemently disagrees with it.
I would rather go with FCC's interpretation on this, rather than competing commercial entities who have a vested interest in one outcome. If FCC didn't like the amendment, they could have just said no.
Everyone else had to play by the same rules until SpaceX doesn't. You say it's fair, but certainly if you were in that industry as a competitor it appears as favoritism. They also lie to push things through, like rate of decay, speeds offered, latency.
> Everyone else had to play by the same rules until SpaceX doesn't.
> You say it's fair, but certainly if you were in that industry as a competitor it appears as favoritism.
No they didn't. When a competitor said this publicly after the challenge to the amendment was filed, an exasperated FCC official literally said publicly that anyone is, and has always been, able to do things the way SpaceX is doing them, and in fact FCC prefers it. The only reason most companies have not been doing it this way in the past is that SpaceX's approach ends up costing a lot more in filing fees. The argument that SpaceX doing this is somehow favoritism is harebrained. What exactly is stopping the competitors from doing the same?
Whether it's fair or not is also entirely beside the point. That was not what I was arguing at all.
> They also lie to push things through, like rate of decay, speeds offered, latency.
No, they don't. The rate of decay calculations in SpaceX filings have been based on the models FCC expects licensees to use. Yes, they do not perfectly model reality, but they are what FCC wants, and what everyone else uses too. As for speeds offered and latency, well, actual latency as realized in the beta seems well in line with what SpaceX has promised, and while they so far only offer a single speed grade to beta testers, they have demonstrated speeds in line with their original claims to the air force, and presumably will also offer those to customers (at a much higher price point) some point in the future. This is also entirely beside the point.
To be frank, right now you sound like you have read too many dishonest SpaceX hit pieces, synthesized the idea that SpaceX is run by cowboys who habitually ignore the rules, and are grasping at straws to support it, including quite a bit of moving the goalposts.
I have made a single, factual claim: SpaceX meticulously follows the rulings given out by FCC and FAA. I spend quite a lot of time spelunking FCC filings looking for information about satellites, and I honestly believe it to be true. Can you point to a single actual counterexample? Not "competitor is angry about FCC ruling and claims FCC shouldn't have done something", but an actual FCC ruling or regulation on the books, that SpaceX proceeded to break. Because I do not know of a single example, and that is actually rare for a company in the industry. If you believe such an example to exist, please post it, preferably with a link to the actual ruling they are breaking.
I'm not sure what world you're living in, but the failure rate so far has been much higher than what Musk has said. Is it bad? They said 1%, it's 5-6%. Should we count the early satellites? Why not? The newer ones haven't been up long enough to tell how reliable they'll be. He also said they want latency < 20ms on Starlink. Again, not going to happen because of physics, but that didn't stop people from repeating it over and over despite the service being 40ms+ on average. Sure, you CAN hit 20ms if you are next to a POP and your endpoint is a single satellite hop, but that's not the norm.
To be frank, it sounds like you don't understand the industry very well, and that's ok. But not everyone forgets what was said in 2015 when the project was launching. Since you asked, here's one non-truth:
> High capacity: Each satellite in the SpaceX System provides aggregate downlink capacity to users ranging from 17 to 23 Gbps, depending on the gain of the user terminal involved. Assuming an average of 20 Gbps, the 1600 satellites in the Initial Deployment would have a total aggregate capacity of 32 Tbps. SpaceX will periodically improve the satellites over the course of the multi-year deployment of the system, which may further increase capacity.
This ignores A) competitors who have first priority on the spectrum, B) 80% of the satellites are over water and completely unusable during that time, and C) more recently we know that the user terminal at best would perform lower than the 17Gbps low end they cite due to cost-cutting on the scan and G/T of the antenna.
If you believe it's okay to lie in FCC filings so that you win awards and change policies, then that's a valid opinion to hold.
No I didn't. I specifically asked for an FCC or FAA reg that SpaceX has broken. This is a very different thing.
> If you believe it's okay to lie in FCC filings so that you win awards and change policies, then that's a valid opinion to hold.
That's not an opinion I have, and I don't actually believe your claims about them lying are correct here (you seem to assume that FCC is composed of idiots), but regardless, it's completely irrelevant to the point. Even if they do that, my point that once FCC or FAA puts something official on paper, SpaceX treats the words as holy, remains.
Look back at the beginning of this thread. Someone made a statement that can be boiled down to "SpaceX ignores FCC regs". They do not do that. My entire, and only, point here was that they do not do that. You apparently seemed to take that as a "SpaceX is good" statement, and have argued against it with essentially entirely random "SpaceX is bad" statements. How the f is the failure rate of SpaceX satellites even remotely connected to anything here? Or did you just jump to it because you wanted to say something negative about SpaceX.
So I ask again: Do you know of a single case where SpaceX broke or ignored FCC regs? Not things where they did something you think as bad, but where there was a rule, and they broke it. That's a simple question.
Note that SpaceX asked for more, but FCC decided to only grant 10 satellites for now. So, SpaceX did as they always do and followed the order to the letter, and only deployed 10.
That requirement could have come from the government. It's not a crazy thing to ask for when you're launching literally hundreds of satellites into LEO.
Plus, the last thing SpaceX wants is a news story about killer satellites raining debris on innocent citizens. Remember that each satellite launched has to be deorbited eventually, so if they're launching dozens of sats per month they'll eventually be deorbiting dozens of sats per month. If they don't fully burn up each one will have a chance of hitting someone. Sure the chance may be very small, but when you're rolling the dice dozens or hundreds of times per month eventually you're going to land on snake eyes.
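The dice-rolling intuition can be made concrete with the standard cumulative-risk formula, 1 - (1 - p)^n. The per-deorbit risk and deorbit rate below are made up purely for illustration, not actual SpaceX figures:

```python
# If each deorbit independently carries a tiny casualty risk p, the chance
# of at least one incident over n deorbits is 1 - (1 - p)**n. The numbers
# below are illustrative, not actual SpaceX risk figures.
p = 1e-4          # assumed per-deorbit risk of surviving debris causing harm
per_month = 50    # assumed steady-state deorbit rate
years = 10

n = per_month * 12 * years
risk = 1 - (1 - p) ** n
print(f"over {n} deorbits: {risk:.1%} chance of at least one incident")
```

Even a one-in-ten-thousand per-satellite risk compounds to nearly a coin flip over a decade at that launch cadence, which is why demisability matters so much at constellation scale.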
They are explicitly pursuing a demisable sat design.
Note - despite other posters' claims, this is not a requirement, and there are other approaches as well for the safety side here. The most common is to deorbit into the ocean (the sats are maneuverable) with a failure rate and part hazard rate low enough that remaining risk is minimized. I would expect they would deorbit into ocean / near non-populated areas for other reasons as well.
For others like me who are wondering how much bandwidth could be realistically achieved between satellites, here is one real-world product from the company "Tesat":
> The laser terminals supplied by Tesat needed less than 25 seconds on average to lock onto each other and begin transmission in both directions at 5.6 Gbit/s.
Napkin math: that's 56 clients using constant 100 Mbit/sec. With an overbooking coefficient of 0.01, that's 5,600 clients on a 100 Mbit/sec plan per satellite with 1 Mbit/sec guaranteed.
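That napkin math, restated (the link rate is the Tesat demo figure quoted above, not a Starlink spec, and the oversubscription ratio is the commenter's assumption):

```python
# 5.6 Gbit/s inter-satellite link shared among 100 Mbit/s subscribers at a
# 1:100 oversubscription ratio (the 0.01 coefficient above, i.e. 1 Mbit/s
# guaranteed per subscriber). Figures from the parent comment, not a spec.
link_mbps = 5600   # 5.6 Gbit/s
plan_mbps = 100
oversub = 100      # 1:100 oversubscription

dedicated_clients = link_mbps // plan_mbps
oversubscribed_clients = dedicated_clients * oversub
print(dedicated_clients, oversubscribed_clients)
```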
To be clear, that refers to a demo system unrelated to Starlink. I don't think it is known where the Starlink lasers come from or what their capabilities are.
All I know about them is that a reason they weren't deployed in the first batches of satellites is their original design included parts that did not burn up completely on reentry, and SpaceX wants Starlink satellites to always burn up completely, since there are thousands of them.
...however, this is nowhere near any fundamental limit on optical transmission. SpaceX is operating around 20-40 GHz radios, and let's just say a bandwidth of 1% of the frequency with an SNR of 1, giving them a bit rate per channel of 200-400 Mbit/s. The same calculation with near-IR optical (which is optimistic obviously) gives a 3 Petahertz frequency, 30 THz bandwidth, and 30 Tbit/s per channel. So optical is nowhere near any kind of fundamental limit, unlike radio, which needs many channels (ie phased array, MIMO, spatial multiplexing generally, etc) to saturate the bandwidth of the satellite bus.
(Also, note that broadband users have a capacity factor of like 1-2%, so on average 20Gbps can give service to like 10,000 subscribers per viewable satellite using 100Mbps peak service.)
So you are correct, of course. I meant to use 1000nm, not 100nm. However, the thing about being in space is you actually COULD use vacuum UV (including 100nm) which doesn’t propagate in fiber and is even blocked by air. Not only does that have the bandwidth advantage but also allows even smaller apertures (or greater gain for the same aperture). Of course, vacuum UV transceivers aren’t really available (you can get coherent UV lasers down to 126nm) but in the long term it is possible.
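Redoing the Shannon arithmetic with the corrected 1000 nm (~300 THz) near-IR figure, using C = B log2(1 + SNR) and the same assumptions as above (bandwidth = 1% of carrier frequency, SNR = 1):

```python
# Shannon capacity C = B * log2(1 + SNR) for the rough numbers upthread:
# RF carriers at 20-40 GHz vs a ~300 THz (1000 nm) near-IR optical carrier,
# each assumed to use 1% of the carrier frequency as bandwidth, at SNR = 1.
import math

def shannon_capacity(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

rf_low = shannon_capacity(20e9 * 0.01, 1)     # 200 MHz channel
rf_high = shannon_capacity(40e9 * 0.01, 1)    # 400 MHz channel
optical = shannon_capacity(300e12 * 0.01, 1)  # 3 THz channel

print(rf_low / 1e6, rf_high / 1e6, optical / 1e12)  # Mbit/s, Mbit/s, Tbit/s
```

At SNR = 1, log2(2) = 1, so capacity equals bandwidth: 200-400 Mbit/s per RF channel versus ~3 Tbit/s per optical channel, an advantage of roughly four orders of magnitude from carrier frequency alone.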
I recently read the book Eccentric Orbits about the Iridium comms system that was a lot like this (recommended by the way). One of the technical issues that was mentioned in it was that the satellites had to correct for the Doppler shift in the frequencies they used to talk to each other because of the high relative speed they were traveling at. I wonder if these satellites have a similar problem or if lasers are so directional they can afford to not be too discriminate in what frequencies they accept.
Generally optical comms systems in space are not affected by Doppler shift because of their modulation scheme, not the directionality of the beams. They tend to use some type of on-off keying (sometimes called OOK) where data is encoded in the pattern or duration of fully on and fully off laser pulses. If there's a variation in the frequency of laser light being received, it'll still just register an on or off signal on the photodetector on the receiving spacecraft. Radios, on the other hand, use modulations that rely much more on the actual frequencies and phases of the radio signals they receive, making frequency shift due to Doppler a much bigger factor.
The radio encoding schemes are more efficient in a bits per Hz-of-EM-spectrum sense, but optical systems have the advantage of being at such a high frequencies that even "inefficient" coding schemes can reach very useful datarates.
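A toy sketch of the on-off keying idea described above: the receiver only thresholds detected power, so a Doppler-shifted (or attenuated) carrier still decodes correctly. This is purely illustrative, not flight code:

```python
# Toy on-off keying (OOK) model: bits map to laser on/off, and the receiver
# just thresholds detected optical power. Carrier frequency never enters the
# demodulator, which is why Doppler shift doesn't corrupt the data.
def ook_modulate(bits):
    return [1.0 if b else 0.0 for b in bits]       # 1 -> laser on, 0 -> off

def ook_demodulate(powers, threshold=0.5):
    return [1 if p > threshold else 0 for p in powers]

bits = [1, 0, 1, 1, 0, 0, 1]
# Attenuate the signal (and imagine any frequency shift you like): as long
# as "on" stays above threshold, the data survives.
received = [p * 0.8 for p in ook_modulate(bits)]
assert ook_demodulate(received) == bits
```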
Forward- and back-looking lasers would have minimal speed difference; side laser links don't appear to be at extreme speed differences either, according to these hypothesized connections:
https://youtu.be/QEIUdMiColU?t=132
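For scale, the first-order Doppler shift is delta_f = f * v / c. The sketch below assumes a hypothetical 1550 nm crosslink (a common telecom wavelength; Starlink's actual wavelength is not public) and some illustrative relative velocities:

```python
# First-order Doppler shift, delta_f = f * v / c, for a hypothetical 1550 nm
# optical crosslink at various relative velocities (all assumed values).
C = 299_792_458.0          # m/s
f_optical = C / 1550e-9    # ~193 THz carrier

for v in (100.0, 1000.0, 7000.0):  # m/s relative velocity, assumed
    shift_ghz = f_optical * v / C / 1e9
    print(f"v = {v:5.0f} m/s -> Doppler shift ~ {shift_ghz:.2f} GHz")
```

Even a few GHz of shift at crossing-plane velocities is a tiny fraction of a ~200 THz carrier, consistent with the point above that power-thresholding receivers can simply ignore it.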
Does anyone have a good physical intuitive explanation for why laser and radar data bandwidth to space are such huge pipes?
I have always been amazed at how satellites quote bandwidths on the order of Tb/s, when I don't fundamentally see how they're so different from gigabit fiber, copper, etc. You've still got some oscillation going on and a receiver that has to "decode" it just the same.
Is there a physically intuitive way to understand why the bandwidth of these methods is so great?
Shannon limit says nothing about frequency. It's about bandwidth. You can have just as much spectrum available at lower frequencies than higher, and in fact, it's usually preferable.
The point being that people are discussing very high microwave frequencies here. There is still a ton of spectrum available around 10-25 GHz. You don't need 40+ GHz to get that.
I don't think it is quite that high. Maybe on the order of 20 Gb/s.
> Each satellite in the SpaceX System provides aggregate downlink capacity to users ranging from 17 to 23 Gbps, depending on the gain of the user terminal involved. Assuming an average of 20 Gbps, the 1600 satellites in the Initial Deployment would have a total aggregate capacity of 32 Tbps.
They have a couple of GHz of bandwidth on the satellites for each of the user downlinks/ground station uplinks, and maybe a GHz for the user uplinks/ground station downlinks, operating in the Ku (12-18 GHz) and Ka (27-40 GHz) bands (requested frequencies on page 8 of the linked paper.)
It's usually a straight dead shot. With optical fiber and any land-based radio, there are always obstructions to bend around, physical losses, and ground clutter. Basically signal to noise is way higher with line of sight.
Higher SNR means lower bit error rate means deeper modulation (cutting edge is like 65536-QAM and maybe higher). Noisy channel coding theorem and all that.
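The modulation-depth point can be quantified: M-QAM carries log2(M) bits per symbol, and the Shannon limit C = B log2(1 + SNR) gives the minimum SNR needed to even approach that spectral efficiency in 1 Hz of bandwidth (real uncoded modems need considerably more):

```python
# M-QAM carries log2(M) bits per symbol. To approach k bits/s/Hz, Shannon
# requires SNR >= 2**k - 1; this is a floor, not what a real modem achieves.
import math

for m in (16, 256, 65536):
    bits = math.log2(m)                        # bits per symbol
    min_snr_db = 10 * math.log10(2**bits - 1)  # Shannon SNR floor
    print(f"{m}-QAM: {bits:.0f} bits/symbol, needs > {min_snr_db:.0f} dB SNR")
```

So going from 256-QAM to 65536-QAM doubles bits per symbol but demands roughly 24 dB more SNR, which is why the clean line-of-sight channel matters so much.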
Computing produces a lot of waste heat which is difficult to deal with in space. The cooling system of the ISS can radiate a total of 100kW for example.
I think it is unlikely that we will see data centers in space unless it is absolutely unavoidable, e.g. because of lag between Earth and the place (in space) where the data is needed.
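A rough Stefan-Boltzmann estimate shows why that 100 kW figure implies a lot of hardware; the radiator temperature and emissivity below are assumed values, not ISS specifications:

```python
# Rough radiator sizing via the Stefan-Boltzmann law, P = e * sigma * A * T^4.
# Temperature and emissivity are assumed values, not ISS specifications.
SIGMA = 5.670e-8   # W / (m^2 K^4), Stefan-Boltzmann constant
emissivity = 0.9   # assumed, a good radiator coating
T = 280.0          # K, assumed radiator temperature

watts_per_m2 = emissivity * SIGMA * T ** 4
area_for_100kw = 100_000 / watts_per_m2
print(f"{watts_per_m2:.0f} W/m^2 -> ~{area_for_100kw:.0f} m^2 to reject 100 kW")
```

On the order of 300 m^2 of radiator for 100 kW, before accounting for sun exposure or radiating from only one side, which is why "just put servers in orbit" runs into thermal trouble fast.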
Keep the stateless, replicated, CDN content all on the ground.
Put the global (no longer replicated) datastore in space, and let me run some simple Lambda functions. Low latency, atomic transactions from anywhere in the world! Yes please!
That could be solved by 'spray cooling', where you have a liquid cooling medium which is sprayed out of some nozzles on a beam in one direction through open space for some distance, and then recaptured by some other beam/vane/paddle opposite the spraying nozzles.
Caches trade latency and bandwidth off against cost. It's cheaper to put normal servers in regional colos to serve a lot of local customers, and probably not cheaper to put expensive servers in orbit to serve a varying and small set of customers below a satellite.
I can see running HSMs in space; probably the most cost-effective physical security available.
HSM in space is a compelling idea. You don't need a big satellite either. You really just need to keep a few hundred bytes of key material and then process crypto operations against it from the ground.
It would be very cool indeed to have crypto nodes (using Avalanche consensus perhaps?) running locally on all of these satellites. It shouldn't use that much extra power... you could probably do it at less than 50W per satellite. Right now you can run an Avalanche node on a Raspberry Pi 4, which uses 15 W max.
Even if satellites were free to launch and had unlimited bandwidth, that would be slower than installing servers where the ground stations plug in. So, uh, how much are people willing to pay for novelty?
There may be some ground stations owned by WISPs or whatever servicing a whole community, but I think the theory is that a lot will be small end user terminals providing connectivity to just a household or two in a remote location. If these are your users, then yeah, it could very well be advantageous to have an orbital CDN if millions of them are all streaming the same new Cobra Kai episode at the same time or whatever.
Oh, we're talking solely about serving the people on starlink? Not a general CDN? Yeah, that should have a small niche or two where you could make it better.
> There may be some ground stations owned by WISPs or whatever servicing a whole community
I'm talking about the ground stations that provide internet access to the satellites.
I was thinking it would be more attractive to people who want to host content that is problematic inside of national borders. Basically treat space like international waters so you can host your Pirate Bay or 8Chan or whatever outside of national jurisdictions.
If Starlink was knowingly hosting dubious/illegal content in space, wouldn't the media companies simply make the case that each individual jurisdiction revoke their license to the radio spectrum?
That would be a tough case to make I think. Spectrum allocations are hard to change. I can't think of any case where they were threatened as part of a legal dispute.
You asserted they had been profitable since 2008, and this article was almost a decade later (and forward-looking). They are not profitable (enough) with launches.
They are charging more than it costs to launch, but remember they employ thousands of extremely expensive engineers all year, and unless there are new businesses, that business isn't sustainable: https://www.wsj.com/articles/exclusive-peek-at-spacex-data-s...
The OC claimed they did not have a profitable business model. They do have a profitable business model (in fact they have multiple), they have just not scaled it up enough while at the same time are pumping massive amounts of money into new projects / R&D, so they are not a profitable business. There is a difference between the two things. Space launch and satellite internet are both proven profitable business models where SpaceX is actively growing their marketshare. In fact I wouldn't be surprised if in 2 years time SpaceX is the satellite internet market. No one will want the subpar internet experience provided by geosync satellites when Starlink is fully operational.
Also worth noting that the article is about data from 2015, which was the first year they had ever landed an orbital-class booster (Falcon 9), and only the one time (in December 2015). The first time they ever re-used a booster for a customer payload was in 2017 (after the WSJ article you cite was published). Since then they've re-used boosters many times, with their recent launch actually being the 8th launch+recovery of the same booster. So what was iffy/nascent in 2015/2017 is now proven/consistent in 2021. They are doing much, much better now than when that article was published.
Sure, StarLink may be profitable, especially if they continue to get RDOF funding from the government. I'm skeptical it would be profitable otherwise given the cost of the terminal. I don't believe their launch business will be profitable since the number of launches has been dropping every year, and they are becoming their main customer.
Sorry, I significantly edited my comment before seeing your reply.
Also are we looking at the same chart? Not sure how you can make the claim that the number of launches has been dropping every year... every year since when? 2020 was the most launches ever, and in 2021 they are expected to do more commercial customer launches than all launches in 2020 + Starlink launches on top of that.
Look at that chart, remove all launches where SpaceX was the customer (a net loss unless the payload they launch has payoff), and specifically look at the GTO launches (the ones that pay the most money). There are fewer and fewer GTO launches, and the number of customers has been going down. A cursory glance shows 2017 had 8, 8 in 2018, 4 in 2019, 2 in 2020, and about 8 planned in 2021 (we'll see, these typically slip quite a bit). The point being that they need profitable launches (not a small satellite they rideshare with 10 Starlinks) to sustain a launch business with that many engineers on it. I think 2021 will launch maybe half the GTO/GEO they're targeting this year due to delays, as was the case in 2020.
I mean, clearly the payloads they're launching themselves will have a payoff, that's why they're launching them?
Regardless, commercial launches is what I was talking about as well. Do you have a reference for the idea that GTO launches are the only ones which are profitable / significantly more profitable than other launches? I haven't seen that. Because otherwise it seems pretty out there to only look at a small subset of their commercial launches.
And I'm assuming they'll do more GTO launches once Starship + Superheavy is online.
What do you mean by profitable? You seem to be using the definition of cash-flow-positive, or something like that.
Taking a loan, or investment, does not suddenly make a company unprofitable, that would be a silly way of looking at the world.
I don't have any figures for SpaceX, but I suspect the amount of cash they've been able to raise relates directly to the expected future value of the company. That they're able to launch 60 satellites at once for a fraction of the cost of other launch companies probably helps.
Part of the reason I asked what you meant by profitable is that you seem to be conflating operating and capital expenses.
From the perspective of the 'launch-arm' of SpaceX, launch costs are operational expenses and profitability analysis will typically balance that against payment for the service provided.
From the perspective of the 'starlink-arm' of SpaceX, launch costs are capital expenses and should be accounted for by depreciating the cost over the life of the asset (the satellites).
If you want to look at the profitability across the entire business then you have to be very specific about what you're measuring, and clear what you mean when you say profitable - at least if you want your analysis to be useful.
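A minimal sketch of how the two accounting views diverge; all dollar figures and the five-year satellite lifetime are made-up round numbers, not SpaceX data:

```python
# Same launch bill, two accounting views. All figures are made-up round numbers.
launch_cost = 30_000_000        # assumed internal cost of one launch
launch_price = 50_000_000       # assumed price charged to an external customer
sat_lifetime_years = 5          # assumed useful life of a Starlink satellite

# Launch arm: the cost is opex, matched against revenue in the same period.
launch_arm_profit = launch_price - launch_cost

# Starlink arm: the cost is capex, depreciated straight-line over asset life.
annual_depreciation = launch_cost / sat_lifetime_years

print(launch_arm_profit, annual_depreciation)
```

The same $30M launch shows up as a one-time cost against one period's revenue in one view, and as $6M/year spread over the satellites' life in the other, so "is SpaceX profitable?" has no single answer without specifying the frame.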
> In the U.S., the term accredited investor is used by the Securities and Exchange Commission (SEC) under Regulation D to refer to investors who are financially sophisticated and have a reduced need for the protection provided by regulatory disclosure filings.
In a pre-IPO company, VCs and other investment groups consider unaccredited investors to be a liability, and they'll either give worse terms or no terms. If you're worth a million bucks the SEC considers you to be a grown-up and that you'll ask questions rather than being spoon-fed data that can help you protect your investment, which takes away a bunch of scenarios where you can litigate.
I worked at a company that turned out to have an unaccredited investor. I didn't hear the details but they had to 'fix it' before the VCs would move forward with discussions. (They still didn't get the money.)
GP's implication is that if you have a million bucks you can get (are?) accredited, which will give you access to private shares. I've heard this too, but I couldn't tell you the details.
I am an accredited investor and make (small) investments on a number of platforms (ex FundersClub, AngelList), but have never seen healthy late-stage pre-IPO companies like SpaceX on any of them.
I wonder if they will eventually offer network access from orbit. Ship a turnkey module with a 10G ethernet port out the back...that'd have to be worth a couple million dollars a year.
Starlink's too low to offer reasonable coverage for orbital stations. Even payloads lower than the constellation (and there aren't many) would only see very small beams, and would pass through them quickly. As for the laser system, I think you'd basically have to be co-orbital with them for it to work.
That would be a better product for the traditional GEO comsat operators, but apparently there's not enough market there for them to bother. NASA already has a system for that, TDRS. I wonder how busy it is nowadays?
> NASA already has a system for that, TDRS. I wonder how busy it is nowadays?
last time i had anything to do with TDRSS (circa 2013), it was extremely busy, expensive, and it was the military that got the high bandwidth links. we could only use it to communicate with our research spacecraft during early orbit activities, or emergencies. and we got something like 2400 to 9600bps.
At South Pole we use TDRSS for uploading science data. The upload speed is actually very good - easily our highest bandwidth connection because we have to get hundreds of gigabytes of data a week to the North (mostly South Pole Telescope and IceCube). We also get some "public" capacity for the station which is about 5 mbps while it's up.
Competition for use is still strong. We tend to get a few hour-ish chunks every day and it varies a lot.
(EDIT: Just realised if I'd waited a few hours to post, then I could say that this came via TDRSS ;)
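A quick check on those numbers, taking 300 GB/week as an assumed midpoint of "hundreds of gigabytes":

```python
# Sustained rate needed to move an assumed 300 GB of science data per week.
gb_per_week = 300                 # assumed midpoint of "hundreds of gigabytes"
seconds_per_week = 7 * 24 * 3600

sustained_mbps = gb_per_week * 8e9 / seconds_per_week / 1e6
print(f"~{sustained_mbps:.1f} Mbit/s sustained")
```

That's roughly 4 Mbit/s sustained around the clock, which wouldn't fit alongside station traffic on a ~5 mbps shared link, hence the dedicated high-bandwidth TDRSS chunks.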
NASA runs satellites for this purpose, TDRS, in GEO. But they're for NASA use only, so SpaceX only gets them on Dragon flights (and maybe other NASA payloads.)
Does anyone have any information in what exactly goes into "refurbishing" the satellites which land? What exactly do they do to the rockets to get them mission-ready again?
I assume you meant rockets in your first sentence, as most satellites (other than spacecraft) don't survive landing :)
SpaceX/Elon hasn't gone into a huge amount of detail about what's involved in refurbishing the first stage. I imagine a lot of it comes down to inspecting parts and making sure they haven't been damaged or worn out, and replacing the ones that have. Maybe cleaning out soot from the engines. The stated goal is to not have to do any inspection or refurbishment between flights, but they aren't there yet.
Satellites don't land. Boosters do. They have lots of very high-spec mechanical parts--turbines and valves exposed to high-temperature, oscillating high-pressure, often oxidizing fluids--that must be inspected for damage and wear, so as not to cause explosion on the next flight. In addition, high-stress non-moving parts, e.g. nozzle neck, can erode under high-temperature matter flows.
Exploding rockets ruin their day, disappoint customers, and endanger any people onboard.
Yes, the title implies there is a single polar orbit, when a polar orbit is just any of many orbits with particular characteristics (passing over/near the poles). It's awkwardly worded.
I don't agree that it is a great show if you enjoy physics. In fact, I'd say the show does a headfake towards physics which leads people to think it is reasonable.
It is "good physics" the same way _The Martian_ is good physics. Neither is actually close....but they're far better than you see in Battlestar Galactica.
It's all relative, I mean we're using a benchmark of all previous television scifi having "magical antigravity technology where every ship has standard gravity pulling everything to the interior decks"... It's not a really high benchmark to meet to have at least a LITTLE BIT of plausible science, like spaceships oriented like flying office blocks with internal ladders/elevators, and thrust gravity.
I don't remember The Martian movie having many examples of bad physics (other than a "severe windstorm" on a planet with very thin atmosphere; a mistake carried over from the book). It just didn't mention a lot of the science that made the book interesting.
I don't remember too many glaring examples from The Expanse either. They generally have artificial gravity when under thrust and not when they don't. When slowing down, they generally have the engines pointed towards the thing they're approaching. The characters in high-G maneuvers don't look the way people would actually look in those situations, but that's hard to fake without actually sticking actors in a centrifuge so I'm willing to give them a pass.
Scientific accuracy isn't really the focus of the show, though. It's a show set in space that needs reasonably accurate physics in order to have a plausible setting and not detract from the story and the characters.
I'm gonna have to disagree here. Sometime back in season 2 or 3 they had a shot where a rocket engine's flame cone passed through a girder and it lit up due to the change in temperature. I mean, that's borderline nerd pornography.
The title seems a bit sensational for HN: Space, lasers, stars, orbit. I know the drama is partly embedded in company and product names, but how about:
But it's new. These have lasers that create a new method of operating: they don't need to directly reach a ground station, they can send the data to another satellite (or a chain) first... eventually enabling data to go around the world mostly in the vacuum of space (for speed gain).
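The speed gain is real and easy to estimate: light in optical fiber travels at roughly c/1.47, while the laser links run at full vacuum speed. A quick back-of-envelope sketch (the distance and fiber index here are illustrative assumptions, ignoring real route lengths, satellite altitude, and switching hops):

```python
# Back-of-envelope: one-way propagation delay over a long path,
# comparing light in optical fiber (refractive index ~1.47) with
# light in vacuum. Numbers are rough illustrations, not real routes.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47        # typical refractive index of optical fiber

def one_way_ms(distance_km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

# Assumed great-circle distance, roughly New York -> Sydney (~16,000 km);
# real fiber routes are longer, and satellite paths add altitude legs.
distance_km = 16_000

fiber_ms = one_way_ms(distance_km, C_VACUUM_KM_S / FIBER_INDEX)
vacuum_ms = one_way_ms(distance_km, C_VACUUM_KM_S)

print(f"fiber:  {fiber_ms:.1f} ms one-way")
print(f"vacuum: {vacuum_ms:.1f} ms one-way")
print(f"saving: {fiber_ms - vacuum_ms:.1f} ms")
```

Even under these idealized assumptions the vacuum path saves tens of milliseconds one-way on a transcontinental route, which is exactly the margin the finance use case cares about.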
The title is not any more sensational than the subject is. Inter-satellite laser communications at scale is the entire selling point of Starlink. It's a major tech milestone, the success or failure of this system massively depends on it.