Application for Fixed Satellite Service by SpaceX (fcc.report)
91 points by lgats on Nov 8, 2018 | 89 comments



The application lists the following benefits:

- Rapid, passive disposal in the unlikely event of a failed spacecraft

- Self-cleaning debris environment in general

- Reduced fuel requirements and thruster wear

- Benign ionizing radiation environment

- Fewer NGSO operators affected by the SpaceX constellation

The first two are because there is more atmospheric drag. I believe that orbital debris in the event of a collision was something SpaceX was struggling to mitigate (everyone struggles with this, but no one has put up this many satellites before).

The third is because originally the plan was to launch to a 400km orbit and then have the satellites lift themselves to a 1150km orbit. Now they intend to launch to a 300-350km orbit and lift themselves to 550km. They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.

The fourth is apparently just "there's less radiation lower, and radiation is bad for electronics".

The fifth is just "fewer of the other proposed internet constellations are at this altitude" as far as I can tell.

(All information sourced from the technical information attachment)


> They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.

I'm curious: why will this provide a longer life? Is it the lift burn itself that affects the lifetime (so reduced lift burn is better for the satellite)?


You can use fuel for station-keeping instead of orbit lifting.


Wouldn't burning all station-keeping fuel immediately be better than riding through thicker atmosphere for a while, and burning fuel bit by bit?


No, because a modern satellite with a very high efficiency ion thruster (high specific impulse, low thrust as measured in newtons), once it's been ejected from the second stage of a rocket and is above 99.99% of the atmosphere, would end up in something like a 45,000 x 450 km elliptical orbit if it burned all of its stored xenon fuel immediately after launch.

If you want to keep a satellite in a mostly circular orbit anywhere from 350 x 350 km to 600 x 600 km, you do periodic, very small boost maneuvers.
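For a rough sense of the delta-v involved in the orbit-raising discussed upthread, here is a minimal sketch assuming an idealized two-impulse Hohmann transfer (a low-thrust ion engine actually spirals up gradually and pays somewhat more, but the old-plan vs. new-plan comparison holds):

  import math

  MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
  R_EARTH = 6_371_000.0  # mean Earth radius, m

  def hohmann_dv(alt1_m, alt2_m):
      """Total delta-v (m/s) for a two-impulse transfer between
      circular orbits at the given altitudes."""
      r1, r2 = R_EARTH + alt1_m, R_EARTH + alt2_m
      a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
      dv1 = math.sqrt(MU * (2/r1 - 1/a)) - math.sqrt(MU / r1)  # raise apogee
      dv2 = math.sqrt(MU / r2) - math.sqrt(MU * (2/r2 - 1/a))  # circularize
      return dv1 + dv2

  print(hohmann_dv(400e3, 1150e3))  # old plan: ~390 m/s
  print(hohmann_dv(350e3, 550e3))   # new plan: ~110 m/s

Roughly 3.5x less delta-v spent on raising the orbit, leaving more xenon for station-keeping and deorbit.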


On an unfortunately very theoretical (but apparently already patent-encumbered, from what I can glean from Wikipedia) level, there is also the concept of air-breathing electric propulsion: an ion engine replenished from the same trace atmosphere that causes the drag the engine is supposed to counter. Basically a solid-state propeller that can still work in very thin air.

From my layman's understanding, because the effect of aerodynamic shaping breaks down at very low pressure, exhaust speed would have to be travel speed (TAS) x (total cross section / intake cross section) to keep orbit. Assessing whether that puts the concept in the realm of feasible technology or not is beyond my skills, but at least there seem to be projects working on that question. If it does work out, it would completely change the economics of LEO use.
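A quick momentum-balance sketch of that relation (all numbers are my own illustrative assumptions, not from any actual design):

  # Drag removes momentum at a rate ~ rho * v^2 * A_total, while the intake
  # collects propellant mass at mdot = rho * v * A_intake. Setting thrust
  # (mdot * v_exhaust) equal to drag gives v_exhaust = v * A_total / A_intake.
  v = 7_800.0      # m/s, roughly orbital speed in very low LEO
  a_total = 1.0    # m^2, total frontal cross-section (assumed)
  a_intake = 0.25  # m^2, intake cross-section (assumed)

  v_exhaust = v * (a_total / a_intake)
  print(f"required exhaust velocity: {v_exhaust / 1000:.0f} km/s")  # ~31 km/s

For comparison, existing gridded ion thrusters already reach exhaust velocities of roughly 20-50 km/s, which is why the concept isn't obviously hopeless.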


I'm having a hard time finding good atmospheric density charts that go all the way to orbital altitudes, but this one looks good: http://hildaandtrojanasteroids.net/Atmosphere_model.png

Basically, there's already nothing left at their new planned altitude. In either scenario they'd need to burn fuel in order to compensate for irregularities in Earth's gravitational field, but I assume they've run the numbers internally.


Station-keeping doesn't only involve drag: it also involves rebalancing the orbital plane when a satellite fails, and you need to provision fuel to deorbit.

The higher the orbit, the more fuel you need to deorbit.


Ah! Thanks.


Curious that they don't mention lower communications latency as a benefit.


Perhaps because it's only 2ms per direction. I assume that someone that sensitive to latency wouldn't use satellite communications in the first place.


This constellation should have lower latency over long distance than is even theoretically possible using terrestrial fiber, because the speed of light in space is substantially better than the speed of light in fiber. [0]

Even over short distance the latency (which the "Legal Narrative" pdf quotes as 15ms) is negligible for almost all applications.

[0] paper http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf / video https://www.youtube.com/watch?v=AdKNCBrkZQ4
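The size of that effect is easy to estimate (a sketch: the 1.47 refractive index is typical for silica fiber, and the 10,000 km path is an arbitrary illustrative long haul, ignoring routing detours and the satellite up/down legs):

  C = 299_792.458  # speed of light in vacuum, km/s
  N_FIBER = 1.47   # typical refractive index of silica fiber

  def one_way_ms(distance_km, n=1.0):
      return distance_km * n / C * 1000

  d = 10_000  # km, an illustrative long-haul great-circle distance
  print(f"vacuum: {one_way_ms(d):.0f} ms")           # ~33 ms
  print(f"fiber:  {one_way_ms(d, N_FIBER):.0f} ms")  # ~49 ms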


> than is even theoretically possible using terrestrial fiber,

If we're talking about theoretical limitations then hollow core waveguide fibers should get you close to vacuum speed.


They did elsewhere, just not in their list of benefits. Not sure why it wasn't included there.


This animation based on this proposal is good: https://m.youtube.com/watch?feature=youtu.be&v=AdKNCBrkZQ4


That animation was great. Just what I needed to wrap my brain around this.

But 4,400+ satellites! How many launches is that going to take?

Also, glad to see that Alaska doesn't get shafted in this. As the computer voice noted, it's an FCC requirement.


Good question, so I found this[1] FAQ, which says:

  Using a Falcon 9 at 25 satellites per launch it would take 177 flights, about 36 flights per year.
  Using a Falcon Heavy with 40 satellites it would take 112 flights, over 5 years that's about 22 flights per year.
  Using a BFR assuming 350 satellites per launch, until someone comes up with a better number, would need 13 flights total.
Now, those are based on the old, higher orbit, so presumably the numbers move substantially with this new plan.

[1]https://www.reddit.com/r/Starlink/comments/7zqm2c/starlink_f...
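The arithmetic behind those figures is simple to check (a sketch; the satellites-per-launch values are the FAQ's guesses, and I use the originally authorized 4,425-satellite total):

  import math

  SATELLITES = 4_425  # original authorization; the new filing trims it to 4,409

  for vehicle, per_launch in [("Falcon 9", 25),
                              ("Falcon Heavy", 40),
                              ("BFR", 350)]:
      print(vehicle, math.ceil(SATELLITES / per_launch), "flights")
  # Falcon 9: 177, Falcon Heavy: 111 (the FAQ says 112), BFR: 13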


Ehh, give ‘em a few more years, they can do it with one rocket over the course of a few days.

Only a little </s>.

Imagine a business plan for 4,400 satellites being sane. What a world.


Not only that, but the company's first successful orbital launch was only 10 years ago!

In those 10 years they went from being literally laughed out of rooms to being at the forefront of the industry. And even now that they are arguably "on top" in many ways, they keep trying to do these insane things.

They are currently trying to build a rocket with the largest launch capacity in history, launch and maintain the largest satellite network in history (if successful, it won't just be the largest; it will be at or near half of all satellites in orbit!), and still have a goal of getting humans to Mars.

Say what you will about Elon Musk, the guy knows how to set goals.


> how many launches is that going to take?

In my view, that's kind of the point. SpaceX internet service might make money, or might not; let's presume it loses a little bit of money over time. That's fine, because its true purpose is to reduce costs for SpaceX the rocket manufacturer and refurbisher.

Manufacturing facilities operate best when they have an even load. Scaling up and down, laying off then hiring, etc, is bad for business. By having this perfectly flexible customer, SpaceX can do a lot less of that. They can scale up at an even pace.

The end goal will be SpaceX launching every X days exactly, always with a mix of external and internal customers on those launches.


Wait, how are the satellites orbiting in planes that don't pass through the center of the Earth?


I think it just looks that way from the way it's rendered: the continuous line at the northern limit isn't an orbit, but formed from the northernmost segment of many.


No, not that. It looks like the satellites are orbiting in circles in higher-latitude planes, if you follow the motion of the boxes.


I love this animation, although it may need to be slightly revised with this new FCC application.


Not sure where you got 116. Looks like the number is the same but they're relocating some.

"On March 29, 2018, the Commission authorized Space Exploration Holdings, LLC, a wholly owned subsidiary of Space Exploration Technologies Corp. (collectively, “SpaceX”), to construct, deploy, and operate a constellation of 4,425 non-geostationary orbit (“NGSO”)satellites using Ku- and Ka-band spectrum. With this application, SpaceX seeks to modify its license to reflect constellation design changes resulting from a rigorous, integrated, and iterative process that will accelerate the deployment of its satellites and services. Specifically, SpaceX proposes to relocate 1,584 satellites previously authorized to operate at an altitude of 1,150 km to an altitude of 550 km, and to make related changes to the operations of the satellites in this new lower shell of the constellation."


I think this quote better summarizes the change.

"Under the modification proposed herein, SpaceX would reduce the number of satellites and relocate the original shell of 1,600 satellites authorized to operate at 1,150 km to create a new lower shell of 1,584 satellites operating at 550 km"

This shell will also now use 24 orbital planes instead of an originally planned 32. (per a table in the technical information pdf).

The total number of satellites in the constellation goes from 4,425 to 4,409.


Still can't quite wrap my mind around the fact that the world's largest space/satellite programme is about to be run by an LLC.


We've reverted the title from the submitted “SpaceX Files to Introduce Starlink with just 116 Satellites 550 km”.


I wonder if there is any consideration of high quantities of LEO satellites affecting ground-based telescope operations? I guess one question would be "how many LEO objects larger than, say, 1 cubic meter are up there right now?" If SpaceX would be doubling the total number up there, that does sound concerning. I personally love the idea of the global internet constellation they are working on; I just worry that there could be other ramifications.


I think something that most people don’t comprehend is just how HUGE space is. For context, at a 550km orbit, the “surface area” of the sphere surrounding the earth is ~602 MILLION square kilometers. Suppose we had 40,000 objects up there (double the number of objects larger than a softball we have been tracking), all in that exact orbit: we would still only have one object every ~15,000 square kilometers (about 5,800 square miles).
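A quick check of that arithmetic:

  import math

  R_EARTH = 6_371.0  # km, mean Earth radius
  r = R_EARTH + 550.0

  area = 4 * math.pi * r**2
  print(f"shell area: {area:.3e} km^2")            # ~6.02e8, i.e. ~602 million
  print(f"per object: {area / 40_000:,.0f} km^2")  # ~15,000 km^2 each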

So, while it is a valid concern... until we put up “millions” of items, I think astronomers will be pretty safe. However, orbital debris avoidance... much bigger issue.


I agree with your conclusions. Just a nit: the r² increase in the surface area of a sphere that makes this so much bigger an area to scatter the satellites across also applies to the field of view of telescopes, and thus cancels out.

Edit: Clarification. The above is only true if having a satellite in the field of view is enough to be a problem.


In my opinion, even if this were a huge problem for ground-based telescopes, it wouldn't matter; that negative externality would be tiny compared to the potential benefits.

That said, this project will approximately double the number of man made satellites. I believe these satellites will be smaller than normal, but also closer than normal.

I'm not an expert, but I think number of satellites is actually a really bad predictor for impact. In particular there was one set of satellites that had really bad effects for some reason. See more here [0].

[0] https://en.wikipedia.org/wiki/Satellite_flare


Yeah, I wonder whether modern telescopes do digital integration, so effectively they just delete the errant frames from the recorded captures and carry on from there. In the old days with film capture, the entire exposure would be toast if a satellite flew through the frame.


Are SpaceX going to put a bunch of their previous satellite operator customers out of business?


Not just that, they will even compete on latency with transoceanic fiber operators.


Also bandwidth. Because there is little other use for the satellites over the Pacific, the instant the first phase of the constellation is up, it is better than the Southern Cross fiber line at everything it does.


I'm a bit skeptical about that. Fiber has mind-boggling bandwidth scaling compared to beaming radio signals through a hundred kilometers of atmosphere. Light has a higher bandwidth and you can put more than one fiber into a cable and there's less interference too.


Their satellites don't communicate with each other by radio links, but by laser. Laser works better in vacuum than it does in fiber. Radio is just for the downlinks.(1)

They have not publicly stated anything about the inter-satellite links, but a research paper published by independent researchers(2) estimated that based on other state of the art, they get >100Gbps of bandwidth between two communicating satellites. The total trans-pacific capacity between NZ and the US will be better than the SC fiber line because there will be many different non-sharing paths using different satellites.

(1): Also likely for crossing connections. The paper mentioned below assumed use of laser for those too, but that is IMHO unlikely because steering would be too hard. Lasers work great for satellites on the same plane and those that are on the neighboring ones, because the angular speed that the system needs to move to track stays very low -- the satellites near them are almost stationary from their point of view. In contrast, satellites on crossing planes zip by very fast and have high angular motion, especially when near.

(2): http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf
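To put a number on the tracking difficulty (my own toy figures, nothing from the filing): the peak angular rate when a crossing satellite passes at distance d with relative speed v is roughly v/d.

  import math

  v_orb = 7.6        # km/s, roughly circular orbital speed at 550 km
  v_rel = 2 * v_orb  # km/s, worst-case closing speed between crossing planes

  for d in (2_000, 1_000, 500):  # km, separation at closest approach
      omega = v_rel / d          # rad/s, peak angular rate
      print(f"{d:>5} km apart: {math.degrees(omega):.2f} deg/s")
  # ~0.44, ~0.87, ~1.74 deg/s, versus effectively 0 for same-plane neighbors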


The up/downlink is the bottleneck if you want to provide backbone-like connectivity over satellites, and that's still done via radio.

Those speeds may be great for individual endusers, but if you're a datacenter which needs a lot of bandwidth then starlink would be limited by the shared radio bandwidth to the sats, which could be quite crowded in an urban environment, while fiber wouldn't be. That's why I only see them as competing on latency, not on bandwidth.


10.7GHz... right on the edge of a radio astronomy protected band. Having a downlink here is not very good...


How can they prevent these satellites from being hacked? If they were, they would be hard to recover, I'd guess?


How do you prevent any communication satellite from being hacked? What makes you think that those satellites are easier to hack than others?


Do you know why they are doing this? Lower orbit is probably cheaper?


Low orbit is cheaper to get a satellite to just considering launch costs (less fuel, easier to recover the first stage), but as siblings have mentioned, it's more expensive overall, particularly considering that you need lots more satellites to provide coverage.

Their plan to launch O(1000) satellites is to get lower latency and higher bandwidth, which would render the current generation of satellite internet obsolete.

It's a great example of the sort of business plan that's only possible with cheap launches that SpaceX's reusable rockets have provided.

Here's a more detailed primer:

https://arstechnica.com/information-technology/2016/11/space...


No, it would not render the current generation obsolete. Not only that, but they are years away from having a fleet up there, and a decade from having the full fleet. Satellite technology progresses a lot in a decade, so you're doing the equivalent of comparing the iPhone 12 to the Galaxy s2.


> No, it would not render the current generation obsolete.

Why not? Just saying "you are wrong" doesn't really add much to the conversation. I'm interested to know more.


You're right, I should have explained better. If you take a look at many of the other satellites announced that don't get the same hype as SpaceX, you'll notice that they're comparable capacity, or have lower advertised capacity with other tradeoffs. SES mPOWER, viasat-3, and Jupiter 3 to name a few. The companies that stick to GEO satellites maintain that LEO satellites waste a lot of capacity over water, and there's nothing you can do about that. Not only water, but underserved areas that can't afford satellite technology are also included in this. Depending on the numbers you look at, this could mean the effective capacity of the satellite is ~10% of what is advertised.

The other issue with LEO is that if you want to double your satellite capacity, you need to launch twice as many as you currently have in the sky, instead of just one or two more large ones. This presents logistical problems, and technical ones as well to some extent.

When people typically refer to GEO satellites, they're unfairly making the assumption that the old crop of GEO satellites are where technology is today; namely fixed, low-capacity satellites. This is not the case. With the HTS (High-throughput satellites) and XTS (Extreme throughput satellites) that have movable capacity, not only do you have a comparable amount of usable capacity to LEO, but you can also move it as business needs change. The latency issue will never change, but if you see my other comments, I'm skeptical they'll be able to achieve the latency everyone is quoting.


Very interesting, thanks for sharing.


What is O(1000) satellites?


It's an unnecessary misuse of the Big O notation used to establish bounds of functions. I think they just meant up to 1000 satellites.


The author means literally 1000 satellites.


It's borrowed from comp sci, but you can read O(x) as "on the order of x", so maybe 900, maybe 1200, but somewhere around there.


In Comp Sci O() notation has a very specific meaning and "on the order of" does not approximate it. I think it was probably just a misuse of the notation.


In informal contexts it's also used as a fudge/handwave, purely as a questionable analogy that nobody is expected to take too seriously. `theptip` almost certainly knows that O(1000) = O(1); they were just being playful.


Exactly.


I have seen it used in many places, and it's understood as "order of". O(n) means the worst-case number of steps needed to complete an algorithm would be c*n + a. So it is a completely valid use, since O(1000) would mean it can be c*1000 + a, which actually means on the order of 1000s.


This is also an inaccurate description of the property O().



https://en.wikipedia.org/wiki/Big_O_notation

> Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.

I.e., it's an asymptotic upper bound.

It's also interesting to compare with Ω() (Big Omega, asymptotic lower bound) and Θ() (Big Theta: big-O AND big-Omega): https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachm...
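For reference, the standard formal definitions, in the usual textbook form:

  f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
  f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0
  f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))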

A good textbook on this subject is Introduction to Algorithms: https://www.amazon.com/dp/0262033844/ .


What I described is the same thing in layman's terms. "Worst case" is the colloquial phrase for an upper bound. And in that example, n was approaching 1000.

If you want to be a purist, the only fault I see in my definition is that instead of using a generic function I assumed a linear function - but that's for explaining the colloquial use.


It's just an asymptotic upper bound (though often used to imply tight bounds).

It's most commonly applied to worst case running time, but is often applied to expected running time ("hash table insertions run in O(1)"), space complexity, communication overhead, numerical accuracy and any number of other metrics.


Yep. Often people mean the narrower big-theta when they express complexity in terms of big-O.


"The letter O is used because the growth rate of a function is also referred to as the order of the function"

order of...order of

yeah I know there is no "growth"


You've taken an informal and sloppy summary of big-O notation, isolated a single English expression, and construed that a similar English expression used in a different context with entirely different meaning is an accurate approximation of the formal meaning. You would be incorrect.


I guarantee you can find dozens of examples of CS-type people using O() in non-algorithmic-complexity topics as "on the order of" or "approximately" right here on HN within the past 6 months. They aren't all misusing that notation - it is being co-opted into a more general lexicon, whether you like it or not.

And I wasn't saying that "on the order of" is an approximation of what O() actually is in CS, merely that that is how OP used it.


Geosynchronous satellite latency in best case: 250ms

LEO latency best case: 6ms

I've used those Hughes satellite connections before, I never got anything close to 250ms. More like 400ms.
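The pure propagation numbers are easy to reproduce (a sketch of an idealized bent-pipe round trip with the satellite directly overhead; real slant paths, modem processing, and queuing only add to this):

  C = 299_792.458  # km/s, speed of light in vacuum

  def ping_ms(altitude_km):
      # user -> satellite -> gateway -> satellite -> user: four vertical hops
      return 4 * altitude_km / C * 1000

  print(f"GEO (35,786 km): {ping_ms(35_786):.0f} ms")  # ~477 ms
  print(f"LEO (550 km):    {ping_ms(550):.1f} ms")     # ~7.3 ms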


The absolute minimum latency you'll ever see with geostationary is about 489ms. That's assuming 1:1 dedicated transponder capacity and something like a higher end SCPC terminal and modem, accounting for latency and modulation/coding/FEC by the satellite modems on both ends.

Consumer grade hughesnet stuff will vary anywhere from 495ms in the middle of the night to 1100ms+ during peak periods due to oversubscription.


This should probably tell you something about spacex's claims as well. The actual latency is never just the slant range. There's a ton of processing and network switching too.


I am pretty optimistic about SpaceX's claims for what the space-segment latency will be. If you look at the system architecture for current generation high-bandwidth Ka-band geostationary services, which has dozens of spot beams on North America, there's about 30 teleports spread out around the US and Canada. These allow Viasat and Hughesnet customers, and similar, to consume capacity in the same spot beam as the teleport they're uplinked from (vs the satellite cross-linking a set of kHz from one Ka-band spot beam to another). For example, customers in really rural areas of Wyoming are going to connect to a teleport that's in Cheyenne, which will usually be in the same spot beam. Sites in Cheyenne near the railway have really good terrestrial fiber capacity for an earth station operator to buy N x 10 or 100GbE L2 transport links to the nearest major city.

It would be technically possible, but uneconomical and an inefficient use of space segment transponder kHz to have customers in Wyoming moving traffic through a teleport in the Chicago area. Here's an illustration of Ka-band spot beams on a typical state of the art geostationary satellite:

http://www.southwestsatelliteinternet.com/images/Ka-band-spo...

Applying the same concept to starlink, telesat's proposed system, and oneweb, if they build a number of teleports geographically distributed near rural areas, it will allow individual satellites to serve as bent-pipe architecture from CPE --> Teleport, within the same moving LEO spot beams, or to have customer traffic take only one hop through space to an adjacent satellite before it hits the trunk link to an earth station. For example customers in a really rural area of north Idaho along US95 might "see" a set of moving satellites that also have visibility to an earth station in Lewiston, ID, where carrier grade terrestrial fiber links are available. Or a customer in a remote mountainous area of eastern Oregon may uplink/downlink through a teleport in Bend.

The ultimate capacity of the system will be determined by how few hops through space they can get the traffic to do. Since every satellite will be identical and capable of forming a trunk link to a starlink-operated earth station, when it's overhead of it, they have an incentive to build a large number of earth stations geographically distributed around the world.

It's basically the same idea as o3b's architecture but at a much smaller scale.


I don't doubt the latency in space numbers. What I don't believe is using a theoretical distance to compute latency. As you said, a LOT of that latency can come from scheduling inefficiencies and congestion. Each of their satellites has a relatively small amount of bandwidth, so if you happen to be in a beam with a lot of people, you'll be hit hard by this. As far as I know, their satellites are not capable of steering beams, and rely purely on the placement directly down from where they are.

Another consideration: adding another 50ms to GEO latency isn't really going to change anyone's opinion. It's still targeted towards streaming, and latency doesn't matter as much since they're not targeting real-time gamers. SpaceX needs the latency to be very low to hit that market. There's a world of difference going from a 30ms ping to an 80ms ping, and once you're past a certain point, it puts you in the same camp as GEO.


> As far as I know, their satellites are not capable of steering beams, and rely purely on the placement directly down from where they are

This is wrong. From their FCC filing(1), they use AESA phased-array antennas, and each satellite is capable of simultaneously maintaining "many" (unspecified) steered beams that are <2.5 degrees wide.

Also, the receiver is capable of distinguishing between multiple beams covering it so long as there is more than 10 degrees of angular separation between them from its point of view. If I understood it correctly, this will allow nearly every visible satellite at the same orbital height (except the ones very nearest the horizon) to communicate with targets that are geographically very near each other at full bandwidth. After the very first phase has been launched, they can provide a total of ~500 Gbps of downlink bandwidth to any spot target that lies between 40 and 60 degrees latitude. The later additions at higher orbits help with total capacity, and especially with targeting multiple targets relatively near each other, but do not help provide more bandwidth per city, as that is limited by the 10 degree angular separation requirement.

The VLEO (330km-ish) constellation will help with that by reducing the size of each spot.

(1): https://cdn3.vox-cdn.com/uploads/chorus_asset/file/8174403/S...
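For a sense of scale, a <2.5 degree beam from 550 km paints a fairly small spot (a sketch assuming a nadir-pointed beam and flat ground, both simplifications mine):

  import math

  altitude_km = 550.0
  beam_full_angle_deg = 2.5  # per the filing, beams are <2.5 degrees wide

  half = math.radians(beam_full_angle_deg / 2)
  spot_diameter = 2 * altitude_km * math.tan(half)
  print(f"nadir spot diameter: ~{spot_diameter:.0f} km")  # ~24 km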


One noteworthy item from the filing is that they intend to build 200 gateway earth stations just within the continental United States, which means that the vast majority of satellites will be functioning as bent-pipe repeaters. I don't think that there will be a lot of traffic traveling satellite to satellite in a multiple-hop arrangement. 200 sites for their Ka-band trunk links from satellite to earth station means that a CPE terminal in, for example, rural NW Montana might have 25-30ms latency to a gateway in Spokane, and from there the latency to internet destinations will be all fiber-based, same as any existing ISP.

If I had to guess on the earth station siting, they are picking locations which are medium-sized cities with decent terrestrial fiber connectivity, within the same satellite view footprint as adjacent rural areas. For example, an earth station in Boise may serve remote mountainous areas of ID.

This 200 earth station figure also leads me to believe that the first manufacturing run of satellites may not have any satellite-to-satellite trunk link ability at all, but that they will ALL be bent-pipe architecture. This means that if SpaceX wants to serve a particular area, they need to have an earth station on terrestrial fiber in the same region, which is simultaneously visible to satellites and end users.


I think that's a very good observation, especially given the recent news that as part of musk firing some of the leadership on the project, he wants the satellites to be significantly simpler.


If that's the case then the problem becomes exponentially more complex than I was thinking, and the technical challenges are going to be far harder than I'd first thought. Doing frequency reuse and interference mitigation at the rates they need to if they're going to steer the beams is enormously complex.


I'm in agreement about the technical challenge - doing it with "low cost" phased array CPE is challenging. If I had to engineer it I'd design something with a pair of highly shielded, tight focal axis parabolic antennas (basically a miniaturized o3b terminal), like two 60cm size on two-axis tracking motorized mounts. But there's no way that sort of setup with a unique rf chain for each of two dishes would be under $5000.


One of my favorite industry analysts just wrote about this. You might find it interesting:

http://tmfassociates.com/blog/2018/11/09/the-new-new-space-t...


Even if it is 150-250ms to terrestrial internet connections, it will be a lot better than consumer-grade geostationary. The unfortunate economics of launching 3500-6000kg things into geostationary orbit means that transponder capacity on current satellites used for consumer-grade VSAT services is horribly oversubscribed. You're not going to get very good satellite service with the current cost structure and tech for $80 to $150/mo on a 3-year contract. One needs to look at figures like $400-500/mo, and a 1.8m antenna for more complex modulation, before VSAT access is really "good".

If the space segment only adds 120ms to what would otherwise be the same-latency RTT ping, it's not so bad; people in the US have been spoiled by having CDNs very near all major IX points.


I disagree with that. I think there's a latency line such that if you exceed it, certain functions are no longer possible. Real-time gaming is one of those, and VoIP as well at a slightly higher latency. You are at a complete disadvantage playing real-time games if your ping is 150ms compared to someone else at 30ms, to the point where you may as well not play those types of games.

I'm not sure what the justification is to assume that Starlink will not be horribly oversubscribed, either. Last I checked it was supposed to be about 32Tbps with all satellites operational. A substantial amount of that is completely wasted over water, so the effective capacity for customers that actually have the money for SpaceX to generate revenue is very small. The types of services people need in remote areas, whether on a plane or in a village that has never had internet, are not those that require low latency. They are either streaming media (plane) or web browsing. In that sense, I don't see how Starlink has an advantage there.

I would be shocked if they could deliver something better than cable on DOCSIS 3 to even 10% of cable customers with comparable service. My guess is it will be tailored more to high-paying customers that happen to not be able to get decent cable.


Probably will be radically oversubscribed in order for the economics to work, and will be a shittier service than a properly implemented vdsl2, g.fast or docsis3/3.1 last mile (nevermind gigabit class gpon or active Ethernet ftth), but significantly better than small geostationary service vsat. And will be higher latency and with worse GB/mo bandwidth quotas compared to a modern technology WISP for last mile.

A lot of the technology press has misunderstood the most desirable applications and locations for it. People think that it's going to compete for residential internet service in a suburb of a city like Portland, or Sacramento, or Denver. If you can get 300 megabit per second DOCSIS3 service in one of those locations, that would be drastically better. Where it is going to be a game-changer is all of the locations that are right now dependent on highly oversubscribed geostationary small VSAT services, and extremely rural areas where there isn't even a single last-mile terrestrial wireless ISP. And for ships in the middle of the ocean, if the gigabyte-per-dollar cost is significantly less than Inmarsat or other options.


I think I understand what you are saying, but I'm confused at this comment compared to your others. If you believe, as I do, that the user terminal is going to be extremely expensive, how will that compete with the current small vsat terminals?

I agree that the service could be better in theory, but at the same time, existing satellite internet service also could be better by taking on fewer customers and not being as congested. But that's a cost trade-off. And in this case, I believe SpaceX has a higher cost per customer to recoup, so it seems in their best interest to also be congested to increase revenue.


I think that there is a good chance the terminal will be expensive, but the cost hidden or eaten by spacex to gain market share. Building a phased array thing with sufficient gain that can track two LEO satellites can't be cheap. But maybe their terminal hardware engineers have come up with something truly revolutionary, and we will all be surprised. I am hoping but skeptical that it could have similar hardware costs to a viasat small ka-band vsat terminal, in the range of $800-900 for rooftop equipment + modem.


> Last I checked it was supposed to be about 32Tbps with all satellites operational.

It will be ~80Tbps after the first three phases (the LEO constellation), ~240Tbps after VLEO.

I agree with you that they probably cannot offer enough bandwidth to compete with residential internet in densely inhabited areas.(1) The system is really interesting in less densely inhabited places, and for backhaul. The complete system has more transcontinental bandwidth between almost any two (distant enough) places than all submarine cables between them put together. This alone will likely pay for the whole system, with plenty to spare.

(1) With a few exceptions. After the full constellation is up, New Zealand will have ~30 times more downlink capacity than the entire bandwidth use of the country as of right now, and will also have tens of times more connecting capacity with North America and Australia than it currently has. But that requires a country of only ~5 million in the Starlink sweet spot that gets all of the bandwidth of all visible satellites to the east of it.


I agree. I think the capacity on paper is very impressive, and the trans-continental capacity will be useful. I'm just more skeptical that they'll easily find a residential market willing to pay what it will cost. Satellite internet is more expensive than cable, and I don't see any way that will change, considering their hardware will be more expensive than current satellite internet hardware (just based on the phased array). 1Gbps cable isn't unheard of these days, but the question is whether there's really a market for it. At some point you are past a speed where it has any material effect on what you're doing, and SpaceX needs a small number of customers paying a large amount of money due to the equipment costs. What they can't have is lots of customers with 50Mbps plans that have identical equipment to someone paying 10x more for 1Gbps.


> LEO latency best case: 6ms

Light travels about 1800km in 6ms, but that's just one way. Straight up and straight down at 550km is 3.6ms.


Light is slower in the atmosphere; did you adjust for that?


Speed of light in air is still ~0.9997c (and approaches 1 as altitude increases), so it's a much less noticeable difference than it is for copper or glass


Cheaper how? I would assume with a lower orbit they experience more drag and deorbit faster meaning they need to be replaced more often.

Maybe using the ones in lower orbit to cover more densely populated locations?


The original orbits would take ~100 years to decay, which is way longer than the life of the satellite. The new orbit is around 5 years (not taking into account maneuvers from the fuel on board). Makes a lot of sense, especially since it looks like they'll be iterating the design as they go (two that jumped out at me: initially not all satellites will be taking advantage of both Ku and Ka bands, and not all satellites will have phased array antennas)


That's really interesting, thank you for sharing.



