
Java Applets were ahead of their time. It took JS/HTML the better part of a decade to catch up in functionality and performance.


The real answer is star navigation at night and inertial + sun-guided during the day. Phone hardware is perfectly capable, but ITAR says otherwise. There's a reason every legal GPS receiver shuts down once altitude or speed limits are exceeded: it's too easy to make weapons that fly in through your window from hundreds of miles away, basically.


From the link mynameisvlad gave below[1], the restrictions are meant to prevent intercontinental missile navigation, not regular missile navigation. The official limits are a speed (1,000 knots) and an altitude (18,000 m), but many manufacturers do an "or" test instead.

"In GPS technology, the term "COCOM Limits" also refers to a limit placed on GPS tracking devices that disables tracking when the device calculates that it is moving faster than 1,000 knots (1,900 km/h; 1,200 mph) at an altitude higher than 18,000 m (59,000 ft).[2] This was intended to prevent the use of GPS in intercontinental ballistic missile-like applications.

Some manufacturers apply this limit only when both speed and altitude limits are reached, while other manufacturers disable tracking when either limit is reached. In the latter case, this causes some devices to refuse to operate in very high altitude balloons."

[1]https://en.m.wikipedia.org/wiki/CoCom
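
To make the "and" vs "or" difference concrete, here's a minimal sketch (TypeScript; thresholds from the quote above, function names made up) of the two readings of the rule:

  // "And" rule: disable only when BOTH limits are exceeded (the stated intent).
  function disabledAndRule(speedKnots: number, altMeters: number): boolean {
    return speedKnots > 1000 && altMeters > 18000;
  }

  // "Or" rule: disable when EITHER limit is exceeded (the stricter reading).
  function disabledOrRule(speedKnots: number, altMeters: number): boolean {
    return speedKnots > 1000 || altMeters > 18000;
  }

  // A balloon drifting at 20 knots at 30,000 m altitude:
  disabledAndRule(20, 30000); // false - receiver keeps working
  disabledOrRule(20, 30000);  // true  - tracking cut off, hence dead balloon trackers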



Is it really so hard for a determined terrorist to build his own GPS receiver? Or at least modify one to remove that limitation?


They don't need to. There are a great many more cost-effective ways of causing havoc, and even if they really wanted to use GPS-guided missiles, it's probably a better bet to build slower missiles with consumer hardware, but build a lot of them - missile interceptors are very expensive. If a DIY $1k missile needs to be shot down by a $1M Patriot, they'll be doing lots of damage with a couple of them, without even hitting anything.


Yes. Look at the ham hobby: hardly any hams can pick up GPS signals, despite the technology being literally everywhere.


That's not true at all. You can use a $20 SDR card to receive GPS signals and then decode them on any commodity laptop, or a small computer like a Raspberry Pi.

e.g. https://www.rtl-sdr.com/receiving-gps-with-an-rtl-sdr-dongle...


Receiving the signals is only a part of the problem, and even getting a "fix" isn't all that hard. Maintaining an accurate fix, while at the same time refining away previous errors, on a fast-moving vehicle, is a dark art.


Software architectures are the limiting factor here.

Every app on your phone cares about "Where am I now", but no app wants to know "Where was I 5 seconds ago, but with more accuracy than you knew when I last asked".

Neither Android nor iOS has an API to allow the GPS hardware to refine the accuracy of historic position fixes.


Possible, maybe, but unusual. 99% of ham radio is narrowband; GPS is spread across a megahertz. Somebody goofed and made a TV tuner chip that could be used as an SDR at that sample rate, and I'm sure the powers that be aren't happy.


You what?

Any sensitive spectrum analyser or SDR can see the bump in the spectrum caused by the GPS signals. And quite a few amateurs have built homebrew GPS receivers.

Or they can buy a little GPS module and use the data stream for a huge range of projects.

Most modern digital ham communication methods require a 10MHz feed from a GPS module to maintain sufficient time and frequency accuracy.


Too late to edit, so replying to myself. I was mostly wrong. The GPS limits are much higher than I thought, and the accuracy of INS is too low, although I still think that limit is artificial.


Yes, the real answer is some impractical nonsense. Star navigation? Are you serious?


I beg to disagree. My phones have all gotten a lock on planes so far.


The limits are a higher altitude and a faster speed than you've ever been. If the Concorde were still around you would see it firsthand. Weapons would be worthless against first-world defences at airliner height and speed. GPS receivers shut down on weather balloons regularly; it's a known issue.

The limits are real, encoded in public law. I don't understand how you could possibly disagree with me on this point


Ignore the comment above - obvious ignorance :) Thanks for explaining.


It's okay, I was wrong about what the limits are too. They've been increased a lot


The reason your GPS receiver does not work on a plane is likely that its processing algorithms aren't tuned for those speeds. The popular u-blox NEO-6M, for example, needs to be explicitly switched to "airborne" mode.

The fact that you are enclosed in a Faraday cage also isn't helping, but they make planes from plastic now, so this won't always be the case.


Planes don’t usually reach the limits, but they exist. The limits seem to be 1200mph or 59000ft altitude.

https://en.m.wikipedia.org/wiki/CoCom


1200mph and 59000ft altitude


Unfortunately many cheap GPS chips use the (incorrect) "or" definition, not "and". It was a pain to find one that works above 59000ft but at very low speed for a weather balloon project some years ago.


I like using an NSF-funded mobile app, "Flyover Country" [1], to identify features in the landscape when I fly. You can download the maps and data for your flight path ahead of time and then use it in airplane mode on the plane. GPS works fine (by the window) and the app locates your position on the map.

https://flyovercountry.io/


Odd that you'd think that aircraft would be covered by these restrictions considering the fact that they use GPS to navigate.


There's not a ton of civilian craft cruising about at nearly Mach 2.


All satellite navigation systems are trivially jammable. They use DSSS to spread their codes and resist jamming, but signal levels at the surface are still so weak that they're trivial to block.

Most phones are compatible with GPS (US) + Galileo (Europe) + GLONASS (Russia). Most of them also report which constellation the locks are from. "GPS Status & Toolbox" on Android is a fun way to see what you're connected to.

What we need, and will probably get soon, is inertial guidance based on ring laser or fibre optic gyros in mobiles. You get a location fix every week or so and it starts out much more accurate than GPS.

The US system was first by maybe a decade or more, so support is nearly universal. The other systems are largely copycats, purposely compatible with existing GPS receivers.

That's why it seems like we're relying on the US for GPS: they invented it and had a full constellation in orbit before anyone else even thought of it.

Of course, we're on the internet, which was also invented by the US govt, so I'm not so surprised that "the backups for this backbone of the global economy have to be American", in reference to GPS at least.


> What we need, and will probably get soon, is inertial guidance based on ring laser or fibre optic gyros in mobiles. You get a location fix every week or so and it starts out much more accurate than GPS.

Corrections every week? This isn't possible given the drift rates of high-end FOG or ring laser IMUs. A high-end marine-grade INS can cost over a million dollars, and these systems typically provide unaided navigation-solution drifts of less than 1.8 km per day. This means that if the device were left stationary for one day, slight sensor errors and imperfect calibrations would, after integrating the position solution, leave the calculated position 1800 meters away from the sensor's actual position.

With ones you could affordably put in a phone, the drift will be huge within minutes. You need something to regularly correct for the drift, and currently this is GPS.


According to the paper, one nautical mile per day was state of the art in 2006. They are shooting for one mile/month now

https://link.springer.com/article/10.1134/S207510871401009X


Thanks I'm interested to take a look in detail.

From the abstract

> In 2006, we presented at the DGON symposium in Stuttgart [2] the design and navigation results of MARINS, the first FOG-based navigation system within the class of 1 nautical mile per day. This navigation system is now in production...

> have we reached the limits of the technology or can we still improve the performance of our sensors?

> Of course, the present FOG design is not good enough for the required performance, even in a strictly controlled environment.

This is a discussion of how it could be improved, not of what is available in production - and certainly not of what's close to fitting in a phone, which was the original point.


Fibre optic gyros could be miniaturized to millimeter dimensions, if it weren't for ITAR.

High-end civilian IMUs typically use mechanical gyros, which have been obsolete for decades. Also, a phone isn't typically moving constantly like a ship on the ocean, so error rates would be lower.


This is drift for a "static" million-dollar marine-grade INS (i.e. the highest grade we currently have), not one in the ocean or moving. Drift is measured in non-moving conditions. These are the fiber optic and laser systems you are referring to.

Even if we could make that cheap and small enough it would still need regular corrections far more frequently than a week to be as good as a GPS is now.


You could use some smart heuristics to make corrections without a GPS lock, like resetting the location to "home" if it is near enough and sits motionless overnight, or making an adjustment whenever the location drifts too far from the known locations of currently connected cell towers.


You won't get close to GPS accuracy with these. The uncertainty from cell tower triangulation is huge (relatively). But yes, some kind of beacon that works like GPS on a local scale or detection of known mapped landmarks could be used for corrections but there are issues with these too. These could assist GPS location rather than replace it entirely.


> These could assist GPS location rather than replace it entirely.

Of course, that was the proposal. There's more datapoints if you're willing to get creative, wifi networks (already used for this), cooperative comparison with other mobile devices in a local meshnet, acoustic cues from the environment, machine analysis of captured images, etc. Obviously dead reckoning without gps is going to require a multi-pronged approach.


> cooperative comparison with other mobile devices in a local meshnet

Please explain how this will work?


I could imagine a secure location service that allows your phone to compare its current expected position with other nearby phones' expectations of their positions. If it's over Bluetooth or WiFi, the positions should be within meters of each other. This could provide an input to a Kalman-filter-type position estimator to help reduce drift as you (for example) walk down the street.
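
A minimal sketch (TypeScript) of one way the combination step could work - an inverse-variance weighted average, the basic building block of such estimators. Everything here is hypothetical and glosses over the hard parts (trust, relative ranging, correlated errors):

  interface Estimate { x: number; y: number; variance: number; }

  // Fuse our own estimate with nearby phones' estimates, weighting
  // each by 1/variance so that confident peers pull harder.
  function cooperativeUpdate(self: Estimate, peers: Estimate[]): Estimate {
    let wx = 0, wy = 0, wSum = 0;
    for (const e of [self, ...peers]) {
      const w = 1 / e.variance;
      wx += w * e.x;
      wy += w * e.y;
      wSum += w;
    }
    // For independent estimates the fused variance is 1 / sum(1/var_i).
    return { x: wx / wSum, y: wy / wSum, variance: 1 / wSum };
  }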


This doesn't make a lot of sense tbh.


What do you mean?

You don't think such a system is practical, or you don't think such a system is technologically feasible?

Or you just don't understand the system I'm describing?


I think there are both feasibility and practicality issues with the cooperative estimation scheme you are describing. Without already knowing where the phones are very accurately, there will be a lot of noise. I'd need a lot more detail to really understand what you intend, but my first reaction is that it'd be very difficult to do well.

If you really care you could sketch out what exactly it is and how it'd work for yourself for a couple of devices (or more) and see what issues you uncover.


You might be right but I have doubts. Most weapons are INS-guided despite the long flight times of cruise missiles etc. ITAR has a massive chilling effect on development; I wouldn't be surprised if we had error rates of less than a meter a day in mobiles if development weren't severely curtailed.


This works the other way - you get to prove your assertion that inertial navigation could work with cheap miniaturized sensors. Anyone can cast doubt without proof.

If your acceleration sensor is off by 1 part per million of 9.8 m/s^2 (i.e. gravity), integrating that bias twice turns it into a positioning error of ~37 km in one day (x = ½at²).
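
The arithmetic, as a quick sketch (TypeScript; assumes a constant bias and x = ½at²):

  const bias = 1e-6 * 9.8;          // 1 ppm of gravity, in m/s^2
  const t = 24 * 3600;              // one day, in seconds
  const error = 0.5 * bias * t * t; // double-integrate the constant bias
  console.log((error / 1000).toFixed(1) + " km"); // ~36.6 km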


Cruise missiles combine (using Tomahawk as an example) GPS, visual terrain-matching, radar terrain-matching, and INS. Because they know INS needs those constant corrections.


> Most weapons are INS guided despite the long fly times of cruise missiles etc.

And because of a long flight time or an imprecise initial reference point (a submarine is floating), some do corrections. One of the coolest, for ICBMs, is to use celestial navigation to correct errors: they'd have a window with a camera and would "look" for a few stars.


Drift is over a given time, not distance. Missiles generally have a very short flight time, even if they're going really far.


Cruise missiles are very different from (quasi-)ballistic or anti-aircraft missiles - they fly at subsonic speeds at low altitudes (usually using a turbojet) to avoid interception. For example, the classic Tomahawk flies at ~900km/h, with the long-range variants having a range of 2500km, giving a maximum flight time on the order of hours, and so a pure INS drift on the order of low hundreds of meters.


Sensor fusion combines multiple sensors with different characteristics, such as:

- GPS: widespread, low accuracy

- INS: always available, high short-term accuracy, terrible long-term accuracy

- terrain-matching: large-scale corrections

The different characteristics allow one sensor to correct the others, producing an overall stable position.
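
A minimal sketch of the idea (TypeScript; the weight is illustrative, not tuned - real systems use Kalman filters rather than a fixed blend):

  interface Fix { x: number; y: number; }

  // High-rate INS dead reckoning, corrected by a low-rate GPS fix
  // whenever one is available: a crude complementary filter.
  function fuse(ins: Fix, gps: Fix | null, alpha = 0.98): Fix {
    if (gps === null) return ins; // no GPS: trust INS short-term
    return {
      x: alpha * ins.x + (1 - alpha) * gps.x, // pull INS toward GPS,
      y: alpha * ins.y + (1 - alpha) * gps.y, // bleeding off the drift
    };
  }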



Not sure what IMUs you're using, but we've been using civilian MEMS and FOG IMUs for years now. You still can't make anything purely inertial good enough to keep position accurately enough to be a GPS replacement for more than a few hours.


Plus the gyros would have to be powered at all times -- that's not gonna work on mobiles.


So this is why my smartwatch can tell me my speed before it has a GPS lock, including indoors? Very cool! But of course, as you say, the reports of my absolute position are very inaccurate.


Velocity can come from the Doppler shift of the GPS carrier frequencies, which is very accurate. An IMU can be used for walking-speed estimation, but it won't be as accurate, whether using a human gait model or direct integration.
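
The relationship is just the Doppler equation; a quick sketch (TypeScript, illustrative numbers):

  const c = 299792458;               // speed of light, m/s
  const fL1 = 1575.42e6;             // GPS L1 carrier, Hz
  const shift = 100;                 // measured Doppler shift, Hz (example)
  const vRadial = (c * shift) / fL1; // velocity along the line to the satellite
  console.log(vRadial.toFixed(1) + " m/s"); // ~19.0 m/s per 100 Hz of shift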


No, any position solution produced before a GPS lock comes from an approximation: either the last known position or cell-tower triangulation. That information is also used as a seed for the GPS lock to speed up the time-to-fix.


> Most phones are compatible with... Galileo(Europe)

The system only went live in 2016. It's only been supported by Apple since the iPhone 6s, and Samsung has supported since the S8.

I would be hesitant to say that most phones are compatible. In 2016 most of the flagship phones had support, but even having a compatible SoC doesn't mean it was implemented. The Google Pixel was released in 2016 with a Snapdragon 821, which is compatible, yet the Pixel does not support Galileo.


There were predecessors to GPS: https://en.m.wikipedia.org/wiki/Gee_(navigation)


> What we need, and will probably get soon, is inertial guidance based on laser ring or fibre optic gyros in mobiles.

Isn't this precisely how accelerometers/gyros in mobiles work?



Thanks! I didn't know about those.

For some reason I was convinced phone gyros use light interference in a spiral of optic fiber. Probably because I read about that design when I was looking at how solid-state gyros in RC models work.


Not a big believer in traditional conspiracy theories. But there's a lot of evidence that the US military has a lot more than they let on.

I mean, GPS, the internet, strong encryption (look up history of DES). Much of what we take for granted came from black or semi-black US military projects. Leaked out I guess, sometimes literally.

God knows what's out there now, but UFO sightings are a reasonable place to start. Hypersonic transport. Plasma stealth. Nuclear engines. EMP, railguns and laser weapons.

What's made public is a tiny fraction of what's already out there. Terrifying but fascinating at the same time.

Funny that we already have weapons powerful enough to end the species but advancement continues.

Most people don't realize, but patents important enough are, and always have been, immediately classified as secret. A little-known purpose of the patent office is to keep the lid on anything truly ground-shaking.

And that's where my conspiracy theories start. There's so much out there kept from the rest of the world. I guess in 20 years or so we'll know what they've got now


Here in the Pacific Northwest there's a secret project that, based on its scope, I estimate thousands of people know about by now - I found out because I'm doing the IT work for one of the subcontractors. The project is almost done.

There's nothing about it on the internet. Absolutely nothing. Not even a hint.


Not surprised, but you might want to delete this. If there's nothing on the internet but this post I would keep quiet.

I use long-lasting throwaways, but it's trivial to find me. If nobody has heard about something, you haven't either. As much as I don't like mysterious technomagic, I understand the reasons; I'm just really curious.

Edit: just for the luls I'll guess that it's a giant radar/bunker complex. Russian/Asian nukes tend to come from the northwest, amiright. Probably in Oregon or maybe Alaska. Either missile defence or early warning.


Sorry, I'm not going to spill the beans. I haven't even told my wife. My main point is that there are secrets that can be kept by large groups of people - I am frankly surprised myself, but I guess I shouldn't be.

Have Blue was kept well hidden for a long time, for example - it's hard for me to imagine this was flying around during the disco era:

https://en.wikipedia.org/wiki/Lockheed_Have_Blue


Exactly why I said 20 years! The Hopeless Diamond. I would still delete this if it's real. Even though you didn't give away anything of substance, it's enough to be marked a traitor.


Devil's advocate, but I have a feeling black people, at least in the US, are driven to look "hipper" by systemic racism.

When society is generally more suspicious and condescending, it makes sense to pay more attention to how others perceive you.

I'm a fat middle-aged white dude. My superpower is blending in with any crowd. I've noticed that most friends who aren't "white"-looking pay more attention to their appearance, IMO.

I appreciate the effort, but it's a damn shame that this is a "thing" in America. It would be nice to see a study done in a country with less historic racism, like Brazil.


I don't even bother with vanilla JS. Babel and TypeScript let you magically use the latest features without caring about compatibility at all, really - except CSS.


People often promote this, but there's a huge gap between a script tag to include jquery.min (for example) and requiring the entire Node/npm ecosystem to cross-compile everything (if you weren't using it already). This is a high tax to pay.


TypeScript is super easy to bring into a project. Just rename .js to .ts and turn the strict checks off; any valid JS is also valid TS. The build step is one command to run tsc. You can turn things on slowly as you refactor into idiomatic TypeScript.
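
A minimal sketch of what that permissive starting tsconfig.json can look like (the flags are real tsc options; the exact values are up to you):

  {
    "compilerOptions": {
      "allowJs": true,   // keep compiling the .js files you haven't renamed yet
      "checkJs": false,  // don't type-check plain JS at all
      "strict": false,   // turn the strict checks on later, one by one
      "target": "ES5",   // emit code old browsers understand
      "outDir": "dist"
    }
  }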

React is a PITA to set up and learn, as is Angular. They're the modern equivalent of Java EE: "heavy", but if you have something complex to do and know your way around, they make hard things easy. The learning curve is tough but so worth it.


How are these ("vanilla JS" and "the latest features") different things? By vanilla JS, do you mean prior to 2015 JS?


Transpiled JS specifically. JS is the only popular language with a great many differences between versions and runtimes, mostly due to rapid improvements. If you use "vanilla" ES2017 without transpilation you're in for a deluge of bug reports.

Instead of worrying about browser Y supporting feature X, I just ram everything through Babel or Typescript compilers and use all the newest features.

We haven't had a single browser-specific JS bug since we started using TypeScript on my project last year. And async is so damn nice.

Babel's output is a bit obtuse sometimes, but TypeScript was designed to output idiomatic JS, so even with no source maps the code is perfectly readable before minification.


How big is the compiled output?


In my experience (compiling TS + babel down to ES5), the compiled output is generally fine. Unless you're trying to squeeze every last kb out of it, asset optimization and NPM dependency bloat are much bigger optimization opportunities anyway.


Haven't looked at Babel's, but the output from TypeScript is human-readable and not much bigger than the original. The lower your target (e.g. ES3 vs ES2015), the more polyfills and the bigger the output.

Even targeting the oldest, crappiest browsers doesn't add that much: maybe 50 kB of injected polyfills and code bloat of around 30%. Not much of a price to pay to use the latest features without caring about compatibility.


If you have even medium complexity you're better off using something like React with Babel or TypeScript. It's a much saner way of handling events than jQuery, and there's no mucking with HTML by hand.

I agree with jQuery for the most simple sites, but React + a transpiler will give you much better compatibility.


It sounds like you're saying jQuery and a modern componentized front-end framework are mutually exclusive. jQuery and modern front-end components can go together very easily - albeit not specifically with React and Vue.

Our company built an in-house front-end framework heavily inspired by React and Vue. The components built using the framework work with or without jQuery. No manual DOM manipulations, one-way data binding, sane state management, templating, etc. Many of us prefer to use jQuery because of how compact and expressive the syntax is.

It really surprises me how many people think that jQuery === old school manual DOM manipulations. It definitely doesn't have to be that way.


Thing is, React is only compatible with React ;-)


It's extremely easy to use React in small parts of a page and spread it out over time. It's a couple of lines of code to init React in a div, and fully supported. You can even communicate between React bits spread across the page.
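
For the record, those couple of lines look something like this (TypeScript/React; the element id and component are made up):

  import React from "react";
  import ReactDOM from "react-dom";

  function Inbox() {
    return <div>inbox UI here</div>;
  }

  // Only #inbox-root is managed by React; the rest of the page is untouched.
  ReactDOM.render(<Inbox />, document.getElementById("inbox-root")!);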

React also lets you mark areas/divs within its domain as "don't touch this". So in those areas you can use things like that old map widget you love so much, without resorting to iframes.

Angular is a huge fail in this regard. Zero support for mixing with "non-Angular" pages.


Everything you describe in this comment is much easier to implement with just jQuery. Your logic looks more like "I can do it with React, so I should, and I believe it's the best tool".


Until you've got some moderately complex state and infrastructure to handle it, where you end up recreating the whole state -> render loop of React in jQuery. I recently did this. It works well, with fewer bugs than most hacky jQuery DOM modification, but by the end I was pining for any modern view library that had already solved this. Same reason I used to pine for jQuery on some projects.


Can you elaborate on this? In my limited experience with React, it takes over your whole application, or at least the whole page. How do you mark areas as "don't touch this"?


I'm not sure about the "don't touch this" part, but in regards to "taking over the application", it certainly doesn't.

React's entry point is the `ReactDOM.render` function, which takes some React elements and a root node. That root node can be _any_ DOM node (actually, no idea if you can mount React into an inline SVG, but any _HTML_ node certainly works). You can also have multiple `ReactDOM.render` calls, no problem at all.

That initial React element can very well take values and callbacks from your Angular or whatever application.

It can get messy if you nest `ReactDOM.render` within roots that are managed by React, but so would nesting any other UI framework - and the use cases for that are... exuberantly exotic, to say the least.


So, the way to look at it is that you “mount” a React application at a specific part of the DOM. Anything under that point is managed by React components.

Usually, people have a single “mount point” very close to the <body> element, and a single script file whose primary purpose is to call ReactDOM.render to “mount” a React component representing their application at that point.

However, that doesn’t prevent you from having DOM outside that mount point, and it doesn’t prevent you from having code which does other things that don’t have anything to do with the React-managed part of the page. This is a great way to migrate a legacy application to React: you choose one small section of your page to replace with React, and then have the remainder of your page interact with it by 1.) choosing when to call ReactDOM.render again to ask it to update, and 2.) passing in callbacks so the React ”mini-app” can notify the rest of your application when something happens.

You can go the other way, too, although it’s a little less safe. If you have a React component which renders, say, a <div> with no children, you can get a handle to the DOM element itself and interact with it just like you would in a non-React application. The only constraint here is that you have to make sure to clean up e.g. event handlers and release references to those elements if something causes the “owner” React component to unmount. This is how people make wrapper libraries which let you use non-React-based libraries via React components. (I’ve done this in a side project where I had a huge number of homogeneous DOM elements to work with, and found that managing them myself was much more performant than trying to run them through React’s model.)
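
A minimal sketch of that wrapper pattern (TypeScript/React; `legacyLib` is a hypothetical stand-in for whatever non-React library you're wrapping):

  import React from "react";

  // Hypothetical non-React library we want to wrap.
  declare const legacyLib: {
    attach(el: HTMLElement): { destroy(): void };
  };

  class LegacyWidget extends React.Component {
    private el: HTMLDivElement | null = null;
    private widget: { destroy(): void } | null = null;

    componentDidMount() {
      // Hand the real DOM node to the library once React has created it.
      if (this.el) this.widget = legacyLib.attach(this.el);
    }

    componentWillUnmount() {
      // Clean up handlers/references when the "owner" component unmounts.
      if (this.widget) this.widget.destroy();
    }

    render() {
      // Childless div: React never touches what the library puts inside.
      return <div ref={(el) => { this.el = el; }} />;
    }
  }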


I have something like that in one of my projects. There is an "inbox" in one of my apps that is handled with React, while the header and top menu are outside of React and generated from the backend. It works great. The only problem I had was during development: I developed the inbox feature separately, and then I copy the dist into my app on compile.


I haven't had to do so much myself, but the docs have decent info on how to obtain it. https://reactjs.org/docs/integrating-with-other-libraries.ht...


React components mount to any element you want. And everything they do will happen within that element if you are using react conventionally. There is no issue mounting components alongside/inside legacy apps.


Angular is a nightmare for complex UI interactions for components that are spread out.

I'm sure experienced people know how to do that properly, but it's really difficult starting out.


Angular's learning curve is the worst I've ever seen. We mostly use it because "Enterprise", but I prefer React when we get permission, for that reason.

Too much abstraction, too much magic. It's possible to do just about anything, but the docs are sparse enough that you'll sometimes be crawling through GitHub PRs and source to figure out how to achieve it.


Angular is an amazing tool for building reusable components, and interoperation between them is pretty easy to implement (just use "input/output" handlers). I'm not saying it just to be contrarian; it really is my opinion, and it's what I do almost every day.


Can you elaborate on this? I've integrated React into Angular and Ember projects in the past with no compatibility issues.


Did the "foreign" elements stamp out React elements or interact with their properties?


I tried not to have multiple frameworks controlling the same DOM elements. It would usually be a React component mounted to a DOM element controlled by a component in another framework.


Exactly, that is my problem: if a React component's state is altered by the "outside world", weird things can happen. So you actually avoided interoperability between the solutions.


Why not Vue?


"something like React" includes Vue.


React is great for rendering (and updating after initial render), but if your app is highly interactive then jQuery makes things easier even when using React for rendering.


You generally don't mix React and jQuery. The event system + JSX makes jQuery obsolete.

Having worked in complex interactive React/Angular apps as well as ones using jQuery, nobody could ever offer me enough to use jQuery again for this use case. They would have to double my pay for all the pain.


Show me a dropdown menu written in React and I'll show you how you can simplify the code significantly by using jQuery in addition to React.


Development with React is becoming similar to old-school event-driven desktop apps, thankfully.

If I want a fancy drop-down with accessibility support, theming, etc., there are probably 50 different libraries with components I can pull in and use with little fuss. A few lines of code to import and register event listeners.

I was a huge user of jQuery for years, and it also has quite nice UI libraries. But the power of React is the standardized scaffolding and lifecycle. I can pull in UI components from almost any library and use them intuitively, mixing and matching them on the same page.

Trying to do that in jQuery will quickly drag you to hell.


>Trying to do that in jQuery will quickly drag you to hell.

Can you elaborate on this please? I've used lots of libraries with jQuery and haven't encountered problems.


Easy to cause by combining Bootstrap components and jQuery UI.

In React you can "scope" a component's CSS so none of the styling leaks out.
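
To be precise, the scoping isn't built into React itself - CSS Modules (or styled-components) is one common way to get it. A sketch, assuming a bundler configured for `*.module.css` and a Dropdown.module.css containing `.menu { border: 1px solid #ccc; }`:

  import React from "react";
  import styles from "./Dropdown.module.css";

  export function Dropdown() {
    // styles.menu compiles to a unique hashed class name,
    // so the styling can't leak into the rest of the page.
    return (
      <ul className={styles.menu}>
        <li>Item</li>
      </ul>
    );
  }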


Can you be more concrete?

If you want some specific code to criticise, there are dozens of dropdown menu widgets available; here's one I found via Google, https://github.com/react-component/menu, since I've never actually had reason to use a JS/HTML drop-down menu widget.


Here's one that uses jQuery. See how much simpler it is? The one you found does have more functionality, but try rewriting the one I found in React without using jQuery. I guarantee it will be much more complicated.

https://github.com/wisercoder/uibuilder/blob/master/SimpleDe...


It's interesting that you find this "simpler". One of the great advantages of React is that it is declarative: it's easy to understand how a component will render given a set of (props, state). Your example is quite the opposite; the render method violates this principle, and each event handler manipulates the DOM using a ref. I'd call this spaghetti React.


Did you miss the fact that this component is interactive? You can't do interactivity declaratively.


By "declaratively" I mean that in React you typically declare how an element should render given the current (props, [state]).

For example, in vanilla JS, you might have something like:

  const btn = document.createElement('button');
  btn.className = 'btn red';
  btn.onclick = function(event) {
   if (this.classList.contains('red')) {
     this.classList.remove('red');
     this.classList.add('blue');
   } else {
     this.classList.remove('blue');
     this.classList.add('red');
   }
  };

In React instead:

  class Button extends React.Component {
    state = { color: 'red' }
    handleChange = () => {
      const color = this.state.color === 'red' ? 'blue' : 'red';
      this.setState({ color });
    }
    render() {
      return (<div>
        <button 
           className={`btn ${this.state.color}`}
           onClick={this.handleChange}>
        </button>
      </div>);
    }
  }
I find it simpler to understand, because render, given its state, describes exactly how the UI should be.


This only works for simple cases. Where it breaks down is when you have to inspect the current state of the DOM before deciding what changes to make. Examples: scrolling an element into view if it is not already visible, drag & drop, interactive resizing, etc.
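
For what it's worth, React's escape hatch for those cases is a ref plus the lifecycle methods - a sketch (TypeScript/React, names made up):

  import React from "react";

  class Row extends React.Component<{ active: boolean }> {
    private el: HTMLLIElement | null = null;

    componentDidUpdate() {
      if (!this.props.active || !this.el) return;
      // Inspect the current DOM state before deciding what to change.
      const r = this.el.getBoundingClientRect();
      const visible = r.top >= 0 && r.bottom <= window.innerHeight;
      if (!visible) this.el.scrollIntoView();
    }

    render() {
      return <li ref={(el) => { this.el = el; }}>row content</li>;
    }
  }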


It's not about how complex the component is. In my experience what matters is how easy it is to use. With React it's easy to build a world in a teacup, and that's mostly a good thing.

For "regular CRUD developers", the vast majority of us, I have zero concern about how big or crazy component code is until it's so huge it impacts page size. I'll use the easiest most feature complete library out there.

React does encapsulation like jQuery never could. I don't care if I have this vanta black box on my page as long as it's easy to mess with and gets the job done


"I have zero concern about how big or crazy component code is until it's so huge it impacts page size." Me and my mobile browser would like to have a few words with you. Not everything that can execute JS is a supercomputer...performance still matters.


You don't care if the code is significantly simpler by using jQuery? I do care. Simpler code is likely to have fewer bugs and is easier to maintain.


I'm talking about my code specifically. With React I can pull in a massively complex UI component in a few lines.

The complexity is shifted to the library, which hopefully has thousands of users and so is mostly bug-free. It simplifies my code at the expense of moving complexity to the shared project. Much preferred, since my code will see orders of magnitude less usage than the library itself.


I care about debuggability and maintainability. Many projects on GitHub have a very short life cycle. If one goes unmaintained, how easily can I take it over?


I would argue that most websites have a lifetime similar to the components :). For the ones that don't, backwards compatibility on the web is amazing; the only thing to worry about is security vulns.


You're correct that simpler code is better, but here's my argument: large jQuery applications are not simpler; they have more bugs, are harder to maintain and perform worse than React applications.

jQuery is a DOM manipulation utility library. React is a view rendering library, favoring composition and isolated components.


What do you even need JavaScript for in a dropdown?


How else are you going to add back in the accessibility you've broken by building it with JavaScript?


For filtering as you type?


You are seriously mistaken about React then. The whole point of MV* libs is that you maintain a model (domain and/or UI) and the UI just redraws itself. jQuery is 100% not needed, even for the most complicated React apps.


When you have a highly interactive component you're going to need to make a lot of DOM calls to get the current state of the elements and so on. jQuery makes this easy. The benefits of jQuery over raw DOM calls have been mentioned by many people here already, so I won't repeat them. Those benefits don't go away in a React app.


The worst I've seen is that Google serves an ancient version of their homepage to Firefox mobile for "reasons". With an extension to spoof the user agent to Chrome, you get the regular page and it works fine.


Used to be the case with Google Maps as well: a Chrome user agent would get you served a faster version.

The worst I've seen, though, isn't by Google themselves. It's user-agent whitelisting on other web sites that think it's okay to prevent users from using the browser they want because they couldn't spend time testing in other browsers. Sometimes they even exclude Chromium. For that reason I have a user-agent spoofing extension in each of my browsers.


IMO, Google has added a new stage to embrace, extend, extinguish that I call "neglect": make everything open source and nice, then keep updating Google-specific functionality while simply not maintaining good support for anything else.

Google probably has the best engineers and greatest minds since IBM in the 60's, possibly ever. But great engineers are attracted to shiny objects, and if all of those are Google objects there's suddenly no reason to use anything else.


Probably a good time to mention that Apple's processors have such better single-core performance that real-world internet speeds are usually much faster.

For example, my buddy has fiber and my Android CPU maxes out at 400 Mbps. He gets close to 600 on Speedtest on an old iPhone.


After Google threw half of Android into "Google Play Services" I don't believe it.

> Then anyone can code implementations to that interface for whatever cloud they like in separate repos, similar to the way `database/sql` is structured

Of course they can, but you're working on a Google-centric version first right? Otherwise why would it be at Google Cloud Next?

That makes all other cloud providers second tier, and I don't think you should call yourselves "platform neutral". If you were, you would code up multiple platforms from the beginning, as is normally done when supporting different operating systems, for example.


(Eng Manager for Go Cloud) The APIs and HTTP server released today all work on AWS as well as GCP, and support for Azure is a top priority but didn't make it for today.

Please see the comment from our PM regarding Azure: https://news.ycombinator.com/item?id=17604358

