Seems like a source that one should not generally take seriously (Tipranks) being syndicated by a source associated with an important name (Nasdaq) as part of a larger wave of articles about how some "pact", "treaty", or "contract" ended on a specific day.
Having done a little more digging, I could find lots and lots of pieces about this topic, but I could not find even one coming from a recognizable journalistic source. (Far from the top of search results was a piece that seemed to have some value: it noted the wave of coverage of the purported end of this pact despite the fact that no such pact existed. It also identified a possible connection between pushing this story, which makes the dollar sound endangered, and attempting to promote cryptocurrency.)
Lots of HN commenters taking this at face value. To me, the overall situation looks like an object lesson in basic critical media literacy.
My understanding of the state of the art of inter-satellite optical links is that they have only been used between satellites that are basically in the same orbital plane and in more or less the same orbit. That is, the angle from one satellite to the other changes very very slowly, so that the optics don't have to do much tracking -- and consequently satellites can only form an optical link with other satellites that are ahead or behind themselves in ~ the same orbit.
Cross-plane optical links would have a trickier tracking problem.
While there's no explicit mention of same-plane vs cross-plane optical links, I assume that the first time people have a public cross-plane optical link, they will make a big deal out of it. :)
The article also mentions that SpaceX would need to do further study before using laser links between satellites and ground stations -- this kind of optical link would require both more angular tracking and probably atmospheric correction as well.
> “Another really fun fact is that we held a link all the way down to 122 kilometers while we were de-orbiting a satellite,” he said. “And we were able to downstream the video.”
> For the future, SpaceX plans on expanding its laser system so that it can be ported and installed on third-party satellites. The company has also explored beaming the satellite lasers directly to terminals on the Earth’s surface to deliver data.
The lasers aren't used for ground-to-satellite comms. While they refer to some of them maintaining a link through the atmosphere, the lasers are intended for satellite-to-satellite communication way above the atmosphere.
There are some wavelengths that maintain decent signal quality through cloud cover, and even rainstorms. I cannot find the paper right now, but iirc Tightbeam (formerly from the Google sharks with lasers team, now spun out as Aalyria), demonstrated space to ground comms in adverse weather with negligible packet loss and something like 40% reduced bandwidth.
The customer terminals will likely never connect through lasers (because a laser can only point in one direction at a time), but moving the ground station uplink to a laser link sounds very beneficial.
It would fall back to radio and/or other connections. The laser connection would probably be sold at a discount rate due to the variable level of service.
Take a look at the slides from the presentation, I think the geometry clearly shows cross-plane links in the mesh. Having worked on these types of systems, I've had more difficulty with the lookahead angles (rx from where the target was, tx to where it will be due to speed of light) than the tracking -- fine tracking performance was required for all modes, and it largely became a GNC and acquisition time issue (since they're ephemeral) for the cross-plane links.
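The point-ahead (lookahead) problem mentioned above is easy to put numbers on. A minimal sketch, assuming a relative transverse velocity of ~10 km/s for a cross-plane LEO encounter (an invented but plausible figure):

```javascript
// Point-ahead (lookahead) angle sketch: the transmit beam must lead the
// receive direction by roughly 2 * v_t / c, where v_t is the relative
// transverse velocity, because light takes time to cross the link in
// each direction.
const C = 299792458; // speed of light, m/s

function pointAheadRad(vTransverse) {
  return (2 * vTransverse) / C;
}

// ~10 km/s transverse velocity -> tens of microradians of lead angle,
// which is why fine pointing budgets are so tight.
const microrad = pointAheadRad(10000) * 1e6;
console.log(`point-ahead angle ≈ ${microrad.toFixed(1)} µrad`);
```

Tens of microradians is far smaller than a typical beam divergence budget would forgive, which matches the comment's point that the lookahead geometry, not raw tracking, is the hard part.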
In general, how is the initial alignment performed?
Is there rough pointing, followed by some rastering, until the sensor gets a hit? Maybe with some slight beam widening first? My assumption is that you would want exactly one laser, one sensor module, and probably a fixed lens on each? Is the sensor something like a 2x2 array, or pie with three pieces, to allow alignment? Or is it one big sensor that uses perturb and observe type approach to find the middle?
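For illustration only, the "rough pointing, then scan until the sensor gets a hit" idea can be sketched as an expanding square spiral; the step size, detector field of view, and target offset below are all invented numbers:

```javascript
// Toy acquisition sketch: start from the coarse pointing solution (origin),
// then walk an expanding square spiral in angle offsets until the detector
// reports a hit. Real systems are far more involved (thermal drift, wobble).
function spiralAcquire(detect, step = 50, maxSteps = 10000) {
  let x = 0, y = 0;           // offsets from coarse pointing, in µrad
  let dx = step, dy = 0;      // current scan direction
  let legLen = 1, legsAtThisLen = 0, stepsThisLeg = 0;
  for (let i = 0; i < maxSteps; i++) {
    if (detect(x, y)) return { x, y, steps: i };
    x += dx; y += dy;
    stepsThisLeg++;
    if (stepsThisLeg === legLen) {
      stepsThisLeg = 0;
      [dx, dy] = [-dy, dx];   // turn 90 degrees
      legsAtThisLen++;
      if (legsAtThisLen === 2) { legsAtThisLen = 0; legLen++; }
    }
  }
  return null; // acquisition failed within the scan budget
}

// Pretend the target actually sits 120 µrad off in x and 60 µrad off in y,
// with a 40 µrad detector field of view.
const hit = spiralAcquire((x, y) => Math.hypot(x - 120, y - 60) < 40);
console.log(hit);
```

This is just the "rastering until the sensor gets a hit" shape of the question, not how any real terminal does it; as the reply below notes, acquisition is usually the secret sauce.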
Also, is there anything special about the wavelengths selected? Are the lasers fit to one of the Fraunhofer lines? 760nm seems like a good choice?
Alas there is no 'in general'. Acquisition is often the secret sauce due to, among other challenges, the extremely tight alignment requirement -- thermal shifts, satellite wobbling, etc, are all critical to manage.
On wavelengths, if you're trying to hit 100gbit+, you're probably having to use coherent optics, and there aren't many technology options or wavelengths on the market.
You got it exactly right! I worked on a simulation model of the complete optical setup of a laser terminal with movable mirrors and all including the fricking servo motors and a simple orbital model for the relative satellite positions. Plus an interface to drop in the actual acquisition and tracking code used on the embedded control system. All of that just to be able to do reasonably realistic simulations for verification and tuning of the secret sauce.
The "routing in the mesh" slide? Definitely given where the satellites are in that picture some of the links would have to be cross-plane, it's just the whole thing looked so messy (even with it being geo-referenced on a globe) that I didn't know whether to consider it a "real routing example" vs a "notional routing example that we overlaid on the globe".
Sounds very cool that cross-plane links are doable, even if they have predictable complications compared to in-plane.
I would have thought that someone would make a big deal (have a press release, e.g.) out of successfully establishing cross-plane links, but maybe it just doesn't seem that impressive to people who already have good enough precise predictive ephemerides or satellite states to make those links in the first place.
Tracking is an issue, but Doppler can also be a thing. At orbital speed (actually at up to 2x orbital speed, for satellites closing head-on) the Doppler effect between two satellites can shift the frequency enough to cause interference. Moving a scope to track a moving target is one problem; letting the algorithms adapt to the frequency shifts on the fly is another.
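A back-of-the-envelope sketch of the size of that shift, assuming a Ka-band carrier of 23 GHz (reportedly roughly where Iridium's crosslinks sit) and a worst-case closing speed of ~15 km/s; both numbers are assumptions for illustration:

```javascript
// First-order Doppler shift for an RF inter-satellite link:
// delta_f ≈ f_carrier * (v_closing / c).
const C = 299792458; // speed of light, m/s

function dopplerShiftHz(closingSpeed, carrierHz) {
  return carrierHz * (closingSpeed / C);
}

// Two LEO satellites closing head-on at ~2x orbital speed (~15 km/s),
// on an assumed 23 GHz Ka-band carrier:
const shift = dopplerShiftHz(15000, 23e9);
console.log(`${(shift / 1e6).toFixed(2)} MHz`); // ~1.15 MHz
```

A megahertz-scale shift that also sweeps rapidly through zero as the satellites pass is exactly the kind of thing a receiver has to track on the fly.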
Indeed Iridium had to deal with the same thing (or I guess, didn’t):
“Cross-seam inter-satellite link hand-offs would have to happen very rapidly and cope with large Doppler shifts; therefore, Iridium supports inter-satellite links only between satellites orbiting in the same direction.”
There were some experiments with communicating over Iridium to small cube-like sats back in the day, but we couldn't make the system-on-a-chip beefy enough to do the Doppler shift calculations on the fly and survive a launch; it was close though. I think it's possible to do now.
In the context of the full article (https://en.wikipedia.org/wiki/Iridium_satellite_constellatio...), it's clear they're talking about the polar orbits used by the Iridium constellation, which have "seams" around the Atlantic and the Pacific as the "first" set of satellites passing north-to-south overlap with the "last" set of satellites coming back south-to-north on the other side of their orbits. So of the 6 orbital planes used by the Iridium satellites, each plane covers 1/12th of the globe for each "half" of its over-the-poles orbit. So there are two "seams" where handoff is not supported, one off the eastern seaboard and one roughly over Japan.
Ah, I didn't realize they have all of their sats in polar orbits, that's interesting. Starlink is mostly in mid-inclination (~53°) orbits afaik; the higher latitudes aren't very well covered.
The Iridium satellites are in what you might call "parallel" orbits, if you stretch the meaning of the word a little bit.
The wikipedia link above explains it well:
"""
Orbital velocity of the satellites is approximately 27,000 km/h (17,000 mph). Satellites communicate with neighboring satellites via Ka band inter-satellite links. Each satellite can have four inter-satellite links: one each to neighbors fore and aft in the same orbital plane, and one each to satellites in neighboring planes to either side. The satellites orbit from pole to same pole with an orbital period of roughly 100 minutes.[8] This design means that there is excellent satellite visibility and service coverage especially at the North and South poles. The over-the-pole orbital design produces "seams" where satellites in counter-rotating planes next to one another are traveling in opposite directions. Cross-seam inter-satellite link hand-offs would have to happen very rapidly and cope with large Doppler shifts; therefore, Iridium supports inter-satellite links only between satellites orbiting in the same direction.
"""
The 'seams' have interesting implications for latency when I was working on Global Data Broadcast.
Doppler is not a big problem with lasers because the link is bang-bang AM modulated: the receiver detects intensity rather than tracking the optical carrier, so a shift in the carrier frequency mostly doesn't matter.
I'm assuming two things: That something like Manchester coding is being used so that some clock skew is tolerable, and that the laser carrier is not in fact being frequency or phase modulated. Last I checked FM and PM of optical frequencies was not yet practical outside of laboratories, but I'm happy to be corrected.
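The reasoning above can be made concrete: for a direct-detection, intensity-modulated link, Doppler shows up mainly as a symbol-clock offset of about v/c, regardless of how high the optical carrier frequency is. A quick sketch with an assumed 15 km/s closing speed:

```javascript
// For a direct-detection (on-off keyed) optical link, Doppler's main
// observable effect is a fractional symbol-rate offset of roughly v/c,
// independent of the carrier frequency itself.
const C = 299792458; // speed of light, m/s

function clockOffsetPpm(closingSpeed) {
  return (closingSpeed / C) * 1e6;
}

// 15 km/s closing speed -> ~50 ppm of symbol clock offset, which a
// self-clocking line code (e.g. Manchester) or a tracking clock-recovery
// loop absorbs easily.
console.log(clockOffsetPpm(15000).toFixed(1), 'ppm');
```

Fifty parts per million is well within what ordinary clock recovery tolerates, which is consistent with the guess that some self-clocking coding plus intensity modulation sidesteps the Doppler issue.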
Nah, I once did a job for a guy and they did LEO-GEO distances alright iirc and LEO-Earth in the mid-end 2000s, which has to deal with some pretty high angular velocities, if not as potentially high as LEO-LEO when they don't happen to be relatively nicely aligned. (In case that sounds strange, the guy was one of the two owners of a small, very specialized company that in turn was subcontracted by a rather bigger company. These laser terminals were quite the beasts and not really cheap.)
Right. The Iridium network had communication between satellites in different orbital planes passing each other but that was a pretty unusual capability.
They do have counter-rotating planes though, so there are places where two satellite tracks next to each other are moving in opposite directions, and these pairs of satellites cannot use the cross-plane communication mode.
Additionally, their inter-satellite links use regular Ka band radio.
It doesn’t get into it too much on pages 14 and 15, but it indeed suggests that they probably exclusively use the “intra-orbital” links closer to the poles to get data to a satellite where the inter-orbital links are more practical.
I believe Iridium had way more downlinks pre-bankruptcy than they ended up using. I guess volume constraints were less of an issue then, so it was OK to hop around more in space.
Apparently it only happens above/below 68 degrees latitude, so the next satellite with a working inter-orbital-plane connection is at most one hop ahead or behind.
I'll assume there is a lot of double/triple (or higher) counting going on here, as data is sent through multiple relay hops to reach the intended target.
My biggest beef with JSX is that what I might call the "most natural way" (and certainly the "most concise way") of writing certain somewhat-complex structures in JSX often ends up being a huge mess. E.g. an element contains some JSX elements with code embedded in them, the embedded code returns other JSX elements that have yet more code embedded in them, and the easiest refactoring to make the whole thing less gross is extracting functions that don't really deserve names of their own.
I'm going to read that gist ( https://gist.github.com/joepie91/bca2fda868c1e8b2c2caf76af7d... ), but what do you dislike or hate about ES modules? From my own experience, they are extremely frustrating when tooling doesn't work well or at all with them.
Given that I don't do JavaScript or front-end for work, I mostly run into these things in hobby programming. Given that this is programming for fun, I can voluntarily cut myself off from all the libraries that use other module systems. In this happy little bubble, over time, ES module support has gotten better and I've selected tools / found ways to use tools that work with modules and I rather like it.
Perhaps because I'm less invested in the tools, I evaluate the situation of "tool X doesn't support ES modules" more like "tool X isn't great" and less like "ES modules are bad".
Perhaps it all stems from being a person who genuinely likes JavaScript, has a high affinity for standards, and a relatively low opinion (yes, I'm a snob) of the Node ecosystem?
You can already do encapsulation without them by using TypeScript namespaces. It's sad that namespaces didn't become part of JavaScript.
I really dislike having to include every last identifier that exists in other files. Yes, I know that IDEs will sometimes do this for you.
I guess the main thing is that they result in your needing a bunch of additional infrastructure (webpack, rollup, bun.js, HMR solutions, etc) just to get your app to run, when TypeScript can already do all this.
> I really dislike having to include every last identifier that exists in other files.
How do you feel about esm's namespace import feature [1]? For example:
import * as React from 'react'
> I guess the main thing is that they result in your needing a bunch of additional infrastructure (webpack, rollup, bun.js, HMR solutions, etc) just to get your app to run, when TypeScript can already do all this.
You can obviously use es modules directly in the browser without additional infrastructure. Bundlers simply combine (bundle) all your dependencies into a single file to prevent multiple http requests (although this is less relevant today with HTTP/2). HMR solves a completely different problem — replacing individual modules in the browser during development when they change instead of reloading the entire page. HMR isn't something that es modules necessitate. You're free to reload your entire page every time you make a change in development.
Off topic and unrelated to the discussion proper, but:
I really wish React had gone with "component factories" instead of `import ... from 'react'`.
This would not only allow for framework-agnostic components (so they could also work in Preact, Mithril, Inferno, etc, etc), it would also make the hooks implementation not dependent on global state.
I don't personally have a problem with ES Modules per se, but I do agree with the "I really dislike having to include every last identifier that exists in other files" part.
I'm a big fan of "global" components in Vue.js, which are "dependency injected" into all your components but in a totally transparent way. You can "just use them" in templates, with zero boilerplate. You also don't have to mock imports in unit tests, you just inject them on an as-needed basis. To me this is 100x cleaner and simpler, there's no room for accidentally running a unit test that actually tests multiple components.
The "irony" is that people avoid them because of the name, but the ones called "global" are actually definitely not global :/
React could also benefit from something like that. The `import { createElement }` part (which is hidden, but must be added by Babel plugins) is the worst part of the framework, and is an obstacle to having cross-framework components that work in React/Preact/Mithril/Inferno/etc.
> I don't personally have a problem with ES Modules per se, but I do agree with the "I really dislike having to include every last identifier that exists in other files" part.
Isn't this exactly the problem that esm's namespace import feature solves?
Not really. I elaborated more in the rest of the message.
IMO the language is fine, and imports are fine too, but the way they're used in popular frameworks isn't good programming. Again, IMO.
My problem is with the excess of transitive dependencies in JS files as a programming style, even though the language supports other, more modern idioms just fine. It's the classic banana-gorilla-jungle problem, to quote Joe Armstrong. One component depends directly via imports on dozens of others, and it can't be separated. This specific style needs lots of compilation trickery to mock imports in unit tests, for example. It's like Java or C++ in the 90s: zero independence and maximum coupling of everything, killing all portability and reusability.
Every React component directly depends on React, for example. This was a design decision, and is not strictly necessary, from first principles. This is IMO a bit of a regression in terms of framework design. Like I said, with injection this would be unnecessary.
IMO one doesn't need "dependency injection everywhere" or "interfaces everywhere" (like 2010s Java, which is also not good), but when you have such a good and simple abstraction like "components" in React or Vue, it is a bit of a waste IMO to not use them and just use the imports-everywhere strategy.
Vue got it right, IMO, with "global" components for example. But this is also shunned by the community (even though it is used a lot by the authors of Vue and authors of libraries).
You're editing your comment very frequently, so I'll reply to it as it appears to me right now.
> IMO the language is fine, and imports are fine too, but the way they're used in frameworks isn't the best.
Ok so I think we're talking about two different things. The RawJS author was saying that he doesn't like es modules as a language feature because he has to separately import each identifier from a package. I was pointing out that you don't have to import each identifier separately, you can simply use the 'import * as identifier from "package"' syntax (es module namespace imports).
You seem to be talking about a separate issue (dependency injection).
> The RawJS author was saying that he doesn't like es modules as a language feature [...] You seem to be talking about a separate issue (dependency injection).
Yes, that's why I said "I don't personally have a problem with ES Modules per se". Because I don't. But I dislike the abuse of the feature, and frameworks that force its use.
> Yes, that's why I said "I don't personally have a problem with ES Modules per se". Because I don't.
But you also quoted a criticism of ES Modules and explicitly said "I do agree". That's the part that threw me off. The commenter you were quoting (and agreeing with) wasn't talking about dependency injection. Earlier in the thread he said "I admittedly have a visceral hatred for ES modules", and this was him listing his grievances. Dependency injection and ES Modules are completely orthogonal.
I think your other comment where you admit that this is off topic was a more straightforward statement of your position, because you didn't quote (or purport to agree with) an unrelated argument [1].
I guess that means that the actual rendering gets fully decoupled from the live, but hidden, DOM tree within the WebComponent, and that live DOM tree doesn't really matter aside from the first render.
Thanks! That was my goal! I didn't release it officially because I also wanted to add hooks/extensibility, and that was a lot trickier than I expected. I cannot expect everyone to only use the provided tools, and extensibility was a bit tricky (there's a lot of low-level math operations going on).
I'm not totally sure what you mean by "it doesn't read the component's own DOM but instead gets the `.outerHTML`". Note that I am not a Shadow DOM expert and I made this a couple of years ago, but IIRC the reason I made it this way is that I wanted a lot of flexibility on the transformation.
It's not 1-component-to-1-svg-element, it's more like I might have an arbitrary N number of "HTML elements", which might render into an arbitrary M number of "SVG elements", some of which might even be global (<defs>) so not even in the same order as the HTML elements order.
The focus on "rational" value maximization on individual bets is doubly blind:
- First, you and I might have different utility functions which are based on valuing outcomes differently; if utility functions are different, there may not be a single "most rational" choice
- Second, a single bet is a single bet, but investing in a company or following a specific policy is more like staking a gambler to make a sequence of bets.
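To make the second point concrete, here's a toy model (all numbers invented): a repeated double-or-nothing bet where per-bet expected-value maximization says to stake everything, yet doing so ruins the bettor almost surely, while a log-utility (Kelly) stake grows wealth over the sequence:

```javascript
// Repeated double-or-nothing bet that wins with probability p = 0.6.
// Per-bet expected value is maximized by staking everything (EV = 1.2x),
// but the probability of surviving n all-in bets is 0.6^n -> almost sure
// ruin. Maximizing expected log-wealth (the Kelly criterion) instead
// maximizes the long-run growth rate of the sequence.
const p = 0.6;

function expectedLogGrowth(f) {
  // Expected change in log-wealth per bet when staking fraction f.
  return p * Math.log(1 + f) + (1 - p) * Math.log(1 - f);
}

const kelly = 2 * p - 1; // f* = 0.2 for this even-odds bet
console.log('survival chance after 20 all-in bets:', Math.pow(p, 20)); // ~3.7e-5
console.log('growth rate at Kelly stake:', expectedLogGrowth(kelly));  // positive
console.log('growth rate near all-in:', expectedLogGrowth(0.999));     // strongly negative
```

The single-bet "rational" choice (all-in) and the sequence-optimal choice (stake 20%) diverge completely, which is the sense in which evaluating individual bets in isolation is blind.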
I appreciate this insight. A lot of the EA stuff really feels like Rationalists treading into philosophical problems where theologians and philosophers have been working for like, centuries, cocksure that everyone in the past has at best little to teach them.
hmm. doesn't stuff like this happen every time that chrome implements some interface but other browser vendors don't get on board with that specific version? doesn't this happen... kind of often? isn't it apparent that there's a risk of this happening whenever chrome implements something before there's standards agreement?
> doesn't stuff like this happen every time that chrome implements some interface but other browser vendors don't get on board with that specific version?
Yes. What rarely happens though is that Youtube gets immediately rewritten with the new tech.