What the f*** is the edge? (arcentry.com)
269 points by wolframhempel on Aug 12, 2018 | 182 comments



> Cynics might of course argue that we have come full circle, from thick client to butt back to thick client, but that would miss the point of what "the edge" is all about: In a world of increasingly ubiquitous computing power we are well advised to reflect on where our computation happens and how we can make the most efficient use of the resources at our disposal.

Which is exactly why we the cynics say we've come full circle.

Also, while the cyclical nature of client/server design seems to be a thing, the underlying ownership is unfortunately not cyclical. For instance, a cycle or two ago, the "edge" meant software I bought, running on my own machine. This cycle, it means software I have no stake in, running on a machine I lease and have little control over.

This is the part of the trend that's worrying to me. Cycles of thin/thick clients are irrelevant. Disenfranchising end users is a problem.


In case anybody else is confused like I was: there is a plug-in for browsers that turns all mention of “cloud” to “butt”. Parent apparently has this plugin installed, and thus the “to butt” part is “to cloud”. (Users of said plugin will see me just saying butt a lot here :P )


Thanks, I re-read that multiple times and thought I missed a rapid shift in nomenclature.


I believe it also turns "the cloud" into "my butt". It's a pretty amazing extension, actually.


Interestingly enough I didn't even notice it until reading your comment. Looks like I have a "butt" to "cloud" translator in my brain.


I know I have one, that's why I never notice when the extension is on (I have it installed in Chrome, but I spend half of the browsing time on Firefox these days).


Indeed, for some, edge computing is the answer to "privacy". Nowadays the native or mobile app, although running locally, will often not even launch without the internet because of the subscription licensing model, and one has no reason to believe the support and marketing claims that "all data stays local", unless one has debugged the network traffic with Wireshark or a similar tool... and it might change with any version upgrade.


There are many circles in this industry.

Flash/Silverlight, then HTML5, then WASM.

SOAP, then REST, then JSON APIs.


The cycles you mention are each very interesting.

Flash/Silverlight/HTML5/WASM - this, to me, is a story of a technology being useful, then growing to give too much power to web publishers - who tend to abuse any given power - then reverting to a weaker technology in order to unscrew the web. Rinse, repeat - HTML5 is already rapidly approaching peak Flash, and introduction of WASM isn't going to help here.

Soap/REST/JSON API - you could call it a story of simplification, but I don't get how it even came to be. That is, how XML gained so much popularity, given that simpler and better tools for almost all of its uses were already available and known.


I still don't understand what problems people have with XML. The JSON vs. XML debate feels very much like the tabs vs. spaces debate to me.


I suspect the reasons that XML ultimately lost are:

1. You could represent it as a nested structure of your language's standard lists & maps, whose API you already know and which you could operate on directly. For dynamic languages, this makes it much faster to stick a prototype together, even if it bites you in the ass eventually. But by that stage, you've already chosen JSON.

2. It was fewer characters to type by hand.

3. Particular uses of XML were extremely verbose. The S in SOAP stood for Simple, which looks ironic in retrospect.

4. In a time when payloads didn't routinely use compression, the closing tags could be a noticeable increase in size.

5. The vast majority of communication that now uses JSON didn't benefit significantly from XPath (people prefer to navigate data structures using their language features, not a generic API), namespaces, DTDs, XML Schema etc.

In about that order.

You could argue that much of this is superficial, and it is, but the industry has shown time and again that lowering the barrier to entry, even in ways that make little difference in the long run, usually wins out.
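To make points 1, 2 and 4 concrete, here's the same (entirely made-up) record in both notations:

  <user id="42">
    <name>Ada</name>
    <roles>
      <role>admin</role>
      <role>editor</role>
    </roles>
  </user>

  {"id": 42, "name": "Ada", "roles": ["admin", "editor"]}

The JSON version maps directly onto the lists and maps every language already has, and every piece of structure in the XML version is spelled out twice via its closing tag.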


The sad thing about #5 is that XPath is epic, and trying to simulate its queries in almost any programming language without just implementing XPath itself is tons of grunt work :(. When I am trying to pick through some complex data structure, I still to this day find myself getting tired after writing a bunch of nested for loops, saying "screw this", converting the data to XML, and making short work of the problem with XPath (which only got better and better in subsequent releases of XSLT).
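For anyone who hasn't felt that difference, here is a rough sketch in Python using the standard library's ElementTree (which supports a limited XPath subset); the document and element names are invented for illustration:

  import xml.etree.ElementTree as ET

  # Hypothetical document: orders containing items with a status attribute
  doc = ET.fromstring("""
  <orders>
    <order id="1"><item status="backordered"><sku>A-100</sku></item></order>
    <order id="2"><item status="shipped"><sku>B-200</sku></item></order>
  </orders>
  """)

  # One path expression instead of nested for-loops over orders and items
  skus = [e.text for e in doc.findall(".//item[@status='backordered']/sku")]
  print(skus)  # ['A-100']

The nested-loop equivalent is only a few lines longer here, but the gap widens quickly once conditions start spanning several levels of the tree.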


You forgot my favorite one:

6. Too easy to accidentally reinvent Lisp via xml.

I thought it was funny the first time I ran into an instance of it happening, which quickly turned into horror. Not only was the logic split, but it meant mentally parsing this:

  <If><RegEx string="{AZ*}"><Register /></RegEx></If>
I mean, it sort of makes sense...?


Not that JSON is immune to this: MongoDB's query interface is a reinvention of the same thing, but in JSON.


Basically it's a Markup Language, not a data transfer language, so it was far from ideal to use it as one. See for example the ambiguity between what should go into tag attributes vs. tag content etc. Also the eXtensibility makes things too complicated for many use-cases. You don't have to care about any of this with JSON.


XML started out simple, and then caught really nasty FeatureCreep and DesignByCommittee.


Extreme bloat of the format, which makes it in practice both wasteful and not human-readable.


For me it's about simplicity. Xml is more featureful, more difficult to write a processor for, and more difficult to write out and read by hand.


The X in AJAX is indeed XML. It quickly became apparent, though, that XML was too heavyweight and redundant for the browser (XML DOM!) and even for the internet speeds of the time, while JSON is as simple as JS's object literal syntax. Web 2.0 adopting JSON formed a critical mass, so that later (almost) everyone and everything went (almost) full JSON.


> redundant for the browser (XML DOM!)

I don't quite get this part.

Both HTML & XML cater for DOM models. I think this is & was appropriate.

Browsers had support for executing XSL stylesheets because it made sense for a lot of use cases.


JSON is easier to use from Javascript


JSON is less verbose than XML and easier to read.


> from thick client to butt back to thick client

Did you build something that replaces "cloud" with "butt"?



Ooops, forgot about that extension running on the machine I wrote that comment on.



See? You do have meaningful control over your machine!

You are free to choose from a great number of useful and fun pre-approved apps that you love. The possibilities are limitless!

(Of course, you must choose the same apps as everyone else. You are a loyal citizen, aren't you?)


Are you the author of that extension? If not, why would you give full control over your browsing experience to a third party without any benefit?


There's a word replacer extension I used a few years ago to implement XKCD's entire list of suggested replacements, to my great amusement. Due to my predisposition I consider great amusement to be of tremendous benefit to my well-being.


I love that one. I want an atomic car.


We (Cloudflare) have largely stopped talking about the 'edge' for some of the reasons cited in the article.

1. No one actually knows where it is or what it means.

2. It conjures images of classical ‘edge compute’ use cases like the self driving cars in the article. Those use cases tend to be exceptionally specialized (or just silly).

3. Edge implies a core or origin exists, and maybe it doesn’t need to.

We would rather imagine a world where we embed the compute directly into the network, putting hundreds or thousands of points of presence within a few ms of everyone on earth. We imagine moving all the compute and storage there, not just the fraction that needs ultra speed. If it is easy to run code close to your users, and it's affordable, there's no reason everyone shouldn't run everything as close to the consumers of the Internet as possible.


> It conjures images of classical ‘edge compute’ use cases like the self driving cars in the article. Those use cases tend to be exceptionally specialized (or just silly).

Yes. You do not do hard real time over the public Internet.

(I've actually had the experience of driving a full-sized vehicle over WiFi. When we built our DARPA Grand Challenge vehicle in 2004, we had the test capability of driving it remotely over WiFi, from a control station with a game steering wheel and pedals. Worked fine technically, difficult to drive. 100ms of no signal and the stall timer slammed on the brakes. Self-driving in the "cloud" is not where you want to go.)


There's a project called wifibroadcast (https://befinitiv.wordpress.com/wifibroadcast-analog-like-tr...) that lets you pipe data between two WiFi nodes in monitor mode, so there's no need for association.


You can do hard-realtime over the circuit-switched substrate of the public Internet, though. "Two tin-cans and string" (i.e. two modems and the PSTN network) still works surprisingly well for (low-bandwidth) real-time control across vast distances. You can even get a decent hard-realtime SLA (much better than that of wi-fi or cellular data service) when one of the ends is a cell phone on 3G with its headphone jack plugged into a modem.


An interesting reversal of the networking philosophy described in “the Rise of the Stupid Network” http://isen.com/stupid.html


> 3. Edge implies a core or origin exists, and maybe it doesn’t need to.

Maybe borrow from physics to call it the Egress Horizon.


Would you mind expanding a little? Google has led me to a brand of window maker and, of course, to black hole event horizons, but I am not clear on what an egress horizon specifically is.

I can guess what you intended but would be interested in the specifics.


Egress = output; Horizon = the border of control, where packets are "released to the world"


I assumed that egress horizon was some new black hole phenomenon with interesting properties about not needing a core/ central point.

It's a cool phrase though so we should definitely use it


Hah, maybe I should have just taken the simpler road and said it was just a play on "event horizon."


It is interesting to see that as technology advances, predictions of the past become more and more true, though they may not necessarily look like what the predictors originally thought.

In this case, I think of "the network is the computer". It has been true for a while, though not yet completely, and it's becoming even more so as time passes.

https://en.m.wikipedia.org/wiki/John_Gage


Slight devil's advocate episteme: what about an even-further generalization? How about: the market is the computer?

Not only is the network essentially imbricated with/in the market, but recognizing the market in the role of the computation also means recognizing the roles of all of the different functional components, not just the mechanical/computational "atoms". Basically, I think that thinking this way lets us treat supply-demand basically as a messaging layer.


That fails the sniff test. Markets are made up of agents that want things. Computers only want what humans say they should want, so computers are simply the agents of humanity.

You can't abstract out supply and demand, at least, not forever. Abstraction is something you do to things so that you don't have to cognitively deal with them, they're predictable.

Humans will act predictably for as long as it makes sense for them to do so. Your business model will eventually fail because nobody will want your product / service anymore. Or they won't want exactly how you're delivering it. Or they'll have decided that X company is more reliable.

Someone has to keep on top of what the people supplying the business with life-sustaining revenue actually want. It can't ever be a machine, unless those machines actually become sentient, which is a completely different discussion. The person can use machines, but they can't actually be a machine.

People spend money, not algorithms. People may delegate to algorithms, but it's always a person in the end.


I think there are some questions to be asked about this, but the main one concerns this claim:

> Abstraction is something you do to things so that you don't have to cognitively deal with them, they're predictable.

The word "predictable". What does it mean to be predictable? Are periodic trends with micro-variations considered to be predictable? In that sense, might it not be possible to Fourier-decompose some time-series into a sufficiently-"predictable" set of frequencies?

But actually, I'm not even sure if this qualification is needed. If an abstraction is basically just a way to treat a bunch of events as a single idea, then what if the abstraction isn't something telling you how you should act, but instead something telling you how you should react?

Another assumption, I think, is that the market is human, i.e. because there are affective dimensions and bounded rationalities in the economy that make it unpredictable. This is true right now, but is that inherent to the market? You can argue that even an automated agent has "humanity", in the sense that it is essentially imprinted with their creator, but how much longer will this hold? It's asymptotic, but won't there be a threshold at which the humanity of machines made by machines, with many steps removed, becomes negligible?

http://slatestarcodex.com/2016/05/30/ascended-economy/


If any other kind of agent can make market moves, then that agent has to supplant humans at the 'top'.

For example, the pet market is absurdly large. What drives this market? The actual needs of animals? Of course not. It's the wishes of humans that make that market as big as it is.

It doesn't even have to be that humans are smart enough to control the new market-playing agents. It can also be that human desire so vastly dwarfs whatever external agency has a wish to be in the market that the whole industry ends up catering to the wishes of the humans.

Right now, human desire drives the market for robots. Can you even imagine a point at which robots could develop enough agency to even come close to supplanting the human desire for more and better robots? When does the tail stop wagging the dog here?

People sometimes forget the sheer scale of human economies. Sure, maybe your pet non-human agent might be able to command a few hundred thousand in market activity. Is it even going to be notable over all the other existing types of non-human agency already on the market, like pets and HFT?


> abstraction isn't something telling you how you should act, but instead something telling you how you should react?

A little bit OT: I don't think acting and reacting are separate things in the real world. We're always only reacting.


Thanks for sharing that view. I have a theory along the same line of reasoning: marketing combined with instant purchase capabilities (online buying and fast shipping) has transformed mass consumers into testers. Items are built, marketed, discovered, bought, shipped and reviewed faster, which allows "experimentation" on consumers. The mass consumer market is a giant lab, paid for by the consumers.


As with any emerging computational platform, there is always going to be confusion. For example, when Cloud Computing first premiered I can't count the number of times I had conversations with C-level executives, IT managers, and even engineers thinking that "The Cloud" solved their problem whereas in reality it didn't. With Edge and Fog Computing, we are very much in this early phase of ambiguity.

At present, the closest capability to computing within the network fabric itself is Fog Computing. Cisco's IOx-enabled line of products (https://www.cisco.com/c/en/us/products/cloud-systems-managem...) pair a high-end router with decent computational capabilities. Other companies, like Nebbiolo Technologies (https://www.nebbiolo.tech/) or Fog Horn (https://www.foghorn.io/) provide Fog platforms that are not as locked to specific hardware.

If you would like more information on how Cloud, Edge, and Fog differ, I'll shamelessly plug our blog post from earlier this year:

http://pratumlabs.com/blog/2018/01/what-is-fog-computing/


Thanks, very interesting article and great explanations based on examples. Very exciting space!


A fully mesh P2P computer cloud would be magical. Everybody is the edge that way :)


Read A Deepness in the Sky for a dystopian/cynical view of this technological advancement.


We’re building that right now: https://holo.host


Following that link I learned about an ICO and then got some marketing content. Honestly, I wanted the link to tell me "how" you are going to change things (maybe technical content?); instead I got a flashback to a bunch of years ago, when you would see those landing pages of gurus selling you books about how they got rich online...


An overview is available in the Green Paper: https://files.holo.host/2018/03/Holo-Green-Paper.pdf



Ethereum?


Agree with your points, but we do need terms for the various types of edge computing in our vocabulary to avoid another nebulous concept. How do you communicate with your peers - network-compute/point-compute?


Conveniently all of your users have 90s era supercomputers in their pants. Shame we're wasting that compute on nonsense.


Mobile devices' compute power is there to serve a massive amount of input/output components and sensors. If we start to usurp this computing margin for our applications, the devices will overheat and break, probably irritating quite a few people.


You mean glitter and ads? Because sensors and I/O don't require much power; nowhere near as much as is available.


I mean all of it. There isn't a single cycle on my device that's finding truth or beauty. Is there on yours? If so please let me know so I can get that app!


> Conveniently all of your users have 90s era supercomputers in their pants. Shame we're wasting that compute on nonsense.

Supercomputers were designed and built to be used to capacity (otherwise they would be a waste of money). Modern mobile phones and laptops are not.


Valid points. I meet customers with vague definitions of edges and help them understand that in the age of hyperconvergence, it is the least of your worries. Better to focus on the stack that is going to run on the system.


> Cynics might of course argue that we have come full circle, from thick client to cloud back to thick client, but that would miss the point of what "the edge" is all about ...

You don't have to be cynical -- merely having been in, or having read about, IT trends over the past few decades would suffice to understand that it's all cyclical.

The cynical observation is that fashionable management trends and/or the market's eagerness to engage in ill-considered change as a convenient substitute for thoughtful progress, manifesting as regular vacillation between a centralised and a distributed approach to compute & storage, 'keeps us all in jobs'.


It doesn't necessarily have to be due to an ill-considered decision; thick/thin client cycling can be seen as the obvious result of the network/compute cost-effectiveness ratio swapping. When network is cheaper, thin clients make more sense; when compute is cheaper, it makes more sense to have it local, and thus thick clients.

And due to diminishing returns plus economies of scale, as the market starts investing in one, the other side starts to make more sense.


You may be right, but I'm wary of the implication that 'the market' is rational.

Also, if the market leans more towards network or compute (I'm not sure that's a useful distinction, but I'll run with it for now) wouldn't the economies of scale necessarily continue to accentuate the initial trend, rather than induce an oscillating pattern?


Data likes to accrete, almost as if it has its own virtual gravity.

Cycles oscillate some, depending on compute speed and network latency.

But there isn’t as much oscillation as people think. 80’s/90’s style non-networked personal computing was a rare exception to the usual rule of networked centralisation with smart-ish terminal access.


You're right, my mistake; economies of scale would keep the trend, and diminishing returns dissolve it.


We already came full circle when we went from mainframe to cloud.


Indeed. I make a point of avoiding the c-word entirely, as no two people agree on what it actually means. Plus I started my IT career maintaining a mostly leased line (point to point) network infrastructure, coming into mainframes via FEPs.

The applications delivered by those systems, traversing low-speed (9600bps) links over many hundreds of kilometres felt (and probably were) more responsive than the HTTP-based applications that have, 30 years later, mostly replaced them. Soon after that there was the trend to distribute file & print servers to every branch office, and maintain a breathtakingly large library of desktop applications. A blink later and I was at a Gartner shindig in 2000 where they were convinced the Sun Ray was going to be The Next Big Thing. (It wasn't.) Rinse and repeat.


One time, I got a telemarketing call from someone selling satellite TV or something like that. I kept him on the line some time, asking all sorts of enthusiastic questions about his amazing service. I then proceeded to ask him how high up his satellites were, and if they could be giving God cancer. He (obviously) did not have a good answer. I then disappointedly told him that I was going to have to run his setup by my wife, as she made the religious decisions.

This seems like that, modulo Poe's law.


When did people on this site start using the word "modulo" for something other than the modulo operator in mathematics?



The weird thing is I feel "minus" makes more sense. Isn't "modulo" remainder? Whereas you're talking about the part that's left out, rather than the part that is remaining?


It's not just taking away 1 x though; modulo is "if x weren't a factor"


OH, so you're saying in a % b = c it refers to b rather than to c? That would make sense... I always just read it as "remainder" in which case it would refer to c...


I don't think speech is mathematical; rather the opposite, if anything. In "a equivalent b modulo c", the c is the modality, the property or condition we are interested in. The use in English is different from math's, because natural languages are like mathematics modulo the rigidness. A category theorist would say isomorphic up to.

Here're two quotes taken from wiktionary

> Thus, the underlying structure which I would assign to Navajo will be identical, modulo word order, to the one that we found to be projected in all of the languages studied in chapter 3. 1990, Margaret Speas, Phrase Structure in Natural Language, p. 281

> Moreover, in the role of consumer, each individual (modulo his location) faces the same array of goods and services on sale to anyone who can pay the purchase price[.] 2002, Richard Arneson, "Egalitarianism", in The Stanford Encyclopedia of Philosophy

Mathematically, these expressions could be modeled in a vector space where "word order" or "location" are dimensions that are ignored. It's somewhat like saying: left and up are the same modulo rotation; [1 1 0] and [1 1 4] are the same modulo [x y lim(z->0)], if that makes any sense.


I had looked up "modulo" in a couple dictionaries and only seen the mathematical definition. I hadn't thought about it as being related to "modality", interesting.


You specify what's left over. Makes sense to me, though I can see how it would look backwards


Completely agree on this point.


I used it in my math undergrad 18 years ago, so it isn't an invention of Hacker News.


It’s been in the Jargon file for a few decades...

http://www.catb.org/jargon/html/M/modulo.html


You throw it in to sound smart. Has more to do with algebra/topology (think quotient group/space) than with arithmetic/number theory (n mod k).


I'm laughing so bad!!


> "Edge Caching" it simply refers to the Amazon datacenter closest to whoever requests the site

I can't speak for AWS, but for GCP there are many more Edge and CDN points of presence than there are datacenters:

https://cloud.google.com/about/locations/ (flip to Network tab in map)

And in Google's case "Edge" simply means where you exit the public internet and get on to Google's own network:

https://peering.google.com/#/infrastructure

Although there are also "edge nodes" (caches) located within ISPs outside the Google network.

Disclaimer: I work at GCP, but not on networking.


It's still a "datacenter" though, right? Just a smaller one. It's not a "region" i.e. there's no other services available other than cloudfront (and I guess lambda@edge).

Or do you mean they might not be owned by aws (or Google)?


Not really. Google literally just sends ISPs some servers to stick into their racks with other networking equipment. These are very "scrappy" compared to what people think of as data centers, but if a room with some network cables, an AC, and a rack counts as a data center... then sure, they're datacenters ;)

https://peering.google.com/#/options/google-global-cache

> Once registered and qualified by Google, we will send you a simple agreement for joining the GGC program. After you have electronically signed this agreement, Google will ship you servers that you install in your facility and attach to your network. Google will work with you to configure the servers and bring them into service.


From what I heard, AWS rents racks in commercial datacenters for its Route53 endpoints. So while that's also a datacenter of course, it's not an "Amazon datacenter".


Data center is a meaningless term these days, and varies by context.

It can be multiple buildings over a few square miles forming a resilient "region", or a single building, or all the way down to just some servers running in some rented rackspace in someone else's colo.

All the major clouds rent space like that for their CDN and smaller pops, and sometimes they even start smaller regions that way until they acquire or build their own structures.


> real-world transmission times between e.g. London and San Francisco end up at around 150ms. That's [...] a catastrophe for self-driving cars needing to make realtime decisions.

Why in the world would/should a car in London be relying on a cloud computer in SF to tell it how to make real-time decisions...?


If you are asking this question you are clearly more informed than the sales rep asking about being on the edge.

The point wasn't that a car would ever do this. The point is that it would be absolutely ridiculous to do real time computation with such a high latency. Unless your question is rhetorical, there is no answer except "you would not do that"

Some people out there for sure have no idea that the cloud is still bound by the constraints of physics and spacetime and that there is latency based on how close your servers are.


People should stop coming up with such wrong metaphors. Real-time doesn't mean fast, it just means reliable latency: if you miss a cycle, the system needs to reboot or stop. It doesn't mean real-time systems are particularly fast; they are often rather slow, in fact.

A real-time controller in a car would be perfectly fine with 150ms cycles; it just depends what is controlled. An engine controller is fine with 500 Hz, a GPS controller with 10 Hz, Formula 1 has 10 kHz controllers, ...

So if a costly central computing service is fine to control something in a car, it might be done. But obviously not as described in the article; they are not stupid. A central service would of course be async and not RT - e.g. updating maps or traffic warnings.
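As a toy illustration of "reliable, not fast" (rates and behaviour entirely made up - a real controller would run on an RTOS, not Python):

  import time

  CYCLE = 0.1  # a leisurely 10 Hz control cycle, but with a hard deadline

  def control_step():
      pass  # read sensors, compute, actuate (placeholder)

  deadline = time.monotonic() + CYCLE
  while True:
      control_step()
      remaining = deadline - time.monotonic()
      if remaining < 0:
          # missing the deadline is a fault, not a hiccup: go to a safe state
          raise SystemExit("missed control cycle deadline")
      time.sleep(remaining)
      deadline += CYCLE

The loop is slow by any computing standard, yet "hard real-time" in spirit, because a late cycle is treated as a failure rather than absorbed silently.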


> People should stop coming up with such wrong metaphors. Real-time doesn't mean fast, it just means reliable latency: if you miss a cycle, the system needs to reboot or stop. It doesn't mean real-time systems are particularly fast; they are often rather slow, in fact.

This! In other words, it is determinism. It should be highly deterministic. Most real-time controllers even suggest disabling the cache to be highly deterministic.


I think the better question is why should a car be relying on a cloud computer at all for real-time decisions -- what would that car do when it's in a cellular dead zone?


There are edge cases where it is possible. My employer sells Automated Valet Parking [0] where the video cameras of the garage are used to steer the vehicle. Of course, this is a well controlled space where you can ensure connectivity and the car will drive slowly so you can ensure safety.

I don't believe safe real time and 5G cellular networks can be combined at reasonable scale. The latency is enough for some use cases, but reliability/connectivity/safety is not enough. So we still agree here.

[0] https://www.bosch-mobility-solutions.com/en/highlights/autom...


While I don't want safety critical decisions to be made by cloud services so that my provider can charge me a premium to unlock even safer decision making, I think that it's pretty reasonable to start with full autonomy only on preprepared stretches of road.

If that was the deal then you could have connectivity built into the road or road furniture and if the car detected a lack of connectivity it should come to a safe stop or revert to manual control.

It seems like we could benefit from a technological principle of Subsidiarity. https://en.wikipedia.org/wiki/Subsidiarity


Most of the teleoperation companies at the moment, as far as I know. Worse still, many are intending to gang onto the primary cellular infrastructure to perform the function, which makes everything worse.


It's fine to have your cars remotely controlled with a latency > 150 ms as long as they operate autonomously for anything that happens in a shorter time frame.

But the other way around seems insane.

It's reminiscent of the way people have switched from "human drives, computer deals with emergencies" to "computer drives, human deals with emergencies" as though it was no big difference.


150ms is no big deal. That's on par with top-notch human reaction time.

The big deal is the lack of reliability on that connection.


> 150ms is no big deal. That's on par with top-notch human reaction time.

Comparing a single component's latency in technology to human reaction time is a complete straw man I see far too often. (I'm not blaming you, you are probably just repeating it.) 150ms _is_ a good human reaction time, but in technological terms that includes sensor, network, processor, network bus, actuator, and finally any physical latency of the action being achieved.

A better and contextually relevant comparison here would be: a human pressing a button after a light turns on vs. an autonomous vehicle turning its wheels 1 degree after seeing a light turn on. I bet you they are very similar, and I would not be surprised if the vehicle was slower.

150ms of network latency is significant because it's on top of the existing latency in the _whole_ system that makes up the car, including the sensors, the actuators and whatever buses are in between them all. When you stick its brain in the cloud and add 150ms of latency, it's like a human trying to drive a car through a VR webcam: it makes everything so much harder. 150ms of latency between action and sensor is _very_ noticeable even for humans; this is the same reason why cloud-based FPS gaming (as in piping the result of the render over the network) doesn't work. No one is going to stick the brain of an autonomous vehicle behind 150ms of _additional_ latency.


I evaluated those and left them out on purpose, because it doesn't change the answer. You can sample most sensors extremely fast, but let's say our camera is only 100Hz. 10ms there, 10ms for our processing loop, 0.5ms on each end for the LAN if you insist on it not being included in the 150, 5ms for a not-even-impressive actuator to affect the wheels... Now you're still under 180ms and you're still beating an average human.
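Spelled out as a quick back-of-the-envelope sum (all figures are the deliberately pessimistic assumptions above, not measurements):

  camera_sample = 10       # ms, a 100 Hz camera
  processing    = 10       # ms, one processing loop
  lan           = 0.5 * 2  # ms, one LAN hop on each end
  actuator      = 5        # ms, an unimpressive actuator
  network_rtt   = 150      # ms, the round trip under discussion
  print(camera_sample + processing + lan + actuator + network_rtt)  # 176.0

Still under 180 ms, which is the point being made.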

Being 'noticeable' is a totally different thing. A human can notice single-digit millisecond differences in timing. When you add control latency, there are two main problems, and neither one affects a system designed for remote control.

One is that you're not used to it, and you have to undo a lifetime of learning to adjust. This goes the other way too, it would be very disorienting to magically make someone's body react 100ms sooner to every attempt at movement.

The other big problem is that your movements and button presses no longer line up with the audiovisual feedback. It's really disorienting, and is going to make you miss a lot of shots in that FPS. If you could detach your hands and put them at the other end of the link somehow, that problem would be a lot easier. You already understand how to compensate for the latency of your fingers. You'd still have to get used to it, though.


I would equate "lack of reliability" with "latency that is sometimes rather large".


There is a qualitative difference between a jitter of a few dozen milliseconds, vs suddenly getting no data for twenty seconds because something went wonky.


I think the OP is missing the point here. As the sales rep, the only valid answer he could have given would have been "Yes, it's on the edge."

It is a question designed to find out whether you yourself think the product is good enough, or how well you think it positions itself against the competition. Maybe it's just me being stupid here, but I find it funny how the whole discussion here has turned into a technical debate, which the original customer's point IMHO clearly wasn't about.

There is no edge. You define the edge by what you believe in.


Author here: When talking to an investor you are only a "sales rep" to some degree. You are the owner of a company looking to raise funding by selling a certain percentage of shares, so much is true - but the mindset is more one of finding the right partner.

Given a promising enough vision and execution there's no shortage of interested VCs and the challenge is in picking the right one for the long term. All VCs offer money, but many offer value beyond that. As the VC(s) you'll choose will hold a significant stake including board seats and voting rights you'll want someone with a deep enough understanding of your industry, technology, market, sales or operational model.

Even if we were talking about selling to a customer here, I would argue that one is better off providing long-term value (and forfeiting a deal if one can't) than catering to every buzzword-driven expectation and watching the whole edifice crumble a few months down the line.


I totally agree with your points here. But just as a few others have pointed out, it looks more like "shiny object syndrome", or it could also be that the customer is not an expert in the software field and feels like he or she has to bring something to the table to sound credible.

Your blog post sent me far down memory lane. I still remember a case where I was a starting entrepreneur and I was selling a website remake. The customer asked me if our CMS's database was relational. He clearly did not have a clue what that actually meant, but neither did I :) I knew it was "most probably totally irrelevant" to the whole case (just as you seemed to ponder), but my uncertainty showed. I was maybe not even 20 then, though I was very confident I could deliver. But I was no salesman, more of an enthusiast geek with a vision.

What he was basically asking me was "why should I pick your product instead of all the others? Is this person credible?". It caught me so off guard that I kind of froze and wasn't able to bring the conversation to a level where our product would be compared to the competitors, focusing on the good things etc. Maybe he did it just to check if I was actually aware of any competition!

I didn't get that contract. I still remember it as a lesson learned on many levels, and your blog post somehow reminded me of that moment. Can't really say if they are comparable, but there you have it :)

ps. My first thought today could be more like "what is this person scared of the most in this situation, where does this kind of question stem from?". A great sales rep might ask that question directly back. Cheers!


Replace "edge" with any other hot term and that's your IT market. Even without demand and even without innovation a lot of stuff is changed all the time. Current systems are centralized? Well let's decentralized everything. Current systems are decentralized? Centralize. Every. Thing.

Watching it as an engineer can be especially painful, because you already know that the other thing is not that much better. And more often than not, this b.s. starts with the optimal trade-offs and turns into exactly what was carefully avoided with good architecture and decision making.

Sometimes I really think some of the managers are sitting there thinking "Yes, I know that you are smarter than me. But that is your disadvantage. Just to get some leverage I will now push for the stuff you told me to avoid again and again over the last 3 years, and you can't beat me in that approach because everything in you is fighting against this."


I think the author was missing the point, but not in the way where he should lie. "Edge" in the context asked, with the explanation given, was likely referring to "bleeding edge", "cutting edge", or "leading edge".

As in the customer wants to buy modern technology that makes their company look hip, their developers happy, and gives them an advantage against the competition.

Some rant about CDNs, or about where the edge of the network is, seems inappropriate here.

https://en.wikipedia.org/wiki/Bleeding_edge_technology


> When e.g. AWS Cloudfront, Amazon's Content Delivery Network talks about "Edge Caching" it simply refers to the Amazon datacenter closest to whoever requests the site.

Kind of? If by "datacenter" they mean the things they said earlier that AWS has 18 of, then no, not really. There are many more CloudFront points of presence than there are AWS regions (over 100 of the former). In practice, unless someone lives very close to a proper AWS region (like I do here in DC), odds are there's a CloudFront PoP much closer to them than a "datacenter" (i.e., an AWS region).


It seems a bit nit-picky, but AWS has ~18 commercial regions and far more datacenters. It's the availability zone that's a data center (more or less), not the region. So for each region you end up with 3 to 5 data centers or something like that.


Even a single AZ might consist of multiple datacenters.


This is correct. A PoP can very well be an ISP’s datacenter, even if they usually do not provide hosting / colocation services, because it’s in their own interest since it reduces their overall bandwidth usage.


> This front-layer of computing power is also referred to as "fog computing" or "dew computing".

The bit about self-driving cars talking to a server to make split-second decisions is laughable but this right here just makes you want to flip the table.

If your requirements are so extreme that you need to be on "the edge", you can do that today: it's called client-side development.


People have been doing client-side development for millennia. That's ancient. Who wants boring offline technology that's locked down from communication with the cloud where hundreds - no, thousands of servers are using algorithms that are constantly improved with machine learning?!

I kid. Mostly. Most sane people would agree with you. However some people can't see the forest for the trees. Has "buzzword-driven development" become a phrase yet?


What separates fog from client-side is the concept of hierarchical fog nodes. You could imagine the car's sensors as the edge devices and the car itself as a fog node. The whole fog hierarchy would be something like: edge devices - cars - intersections - ... - cloud. At the intersection level collisions could be avoided by processing data from local sensors and warning incoming cars.


The anecdote at the start of the article seems to me to suggest the investor talked about "being at the edge of science" or technology - i.e. doing something disruptive and new.

But the rest of the article talks about the location of data and servers and its relationship with the word "edge". Am I missing something? Why is the anecdote unrelated to the article?


(Author here) sorry about the confusion, I can see where you are coming from. But yes, the investor was very much enquiring about whether it is on "the edge" in a cloud computing sense. Having said that, it's important to stress that many - in fact, most - of the VCs I've talked to were amongst the most impressive, accomplished, knowledgeable and intelligent people I ever had the pleasure to work with.

There is, however, a (growing?) number of VCs with a purely financial background who approach investment decisions by establishing a framework of future trends/developments (Crypto, Blockchain, Edge, Sharing Economy, E-Mobility and so on) and then vet potential investments based on how well companies align with these trends, as well as on basic suitability criteria (founding team, execution, traction etc.)

This isn't a bad thing per se as it might add a less biased view to investment decisions than the one made by the tech-founder-funds-tech-founder echo chamber, but it can lead to the level of detachment with the fundamentals of what one's talking about displayed in this article.


Thank you for the clarification. "On the edge" seems so strange to me when talking about edge computing/caching but I suppose I'm not in that industry.


I work in cloud computing and also find it strange. At most, I have referred to CDN "edge nodes", but I would've never said "on the edge", but "on the edge node". In other contexts, I simply would've referred to "the closest datacenter".


I thought the same thing.

I believe the author intentionally wanted us to believe it was "the edge" based on your definition, but then wanted to make a point that "the edge" is now "the physical edge".

Another explanation would also be that a couple years ago the stereotypical technically-limited VC would ask if this app would be in "the cloud". The author used the same stereotype and adapted it to "the edge".


> I believe the author intentionally wanted us to believe it was "the edge" based on your definition, but then wanted to make a point that "the edge" is now "the physical edge".

Personally, I immediately knew it's about cloud!edge, even when I saw the title. If you pay attention to current trending buzzwords (as I unfortunately do, skimming the things some people I work with post on company Slack), you'll learn that "edge computing" is the most recent buzzword in the cloud space.


I also got completely lost on what the point was. I thought the investor was asking whether it is "sexy/new" rather than about the technical definition of an edge in a datacenter.


Please fix the scrolling under mobile Safari, it has no momentum.


And chrome.


Thanks for the hint, will do (seems to be iOS specific, hmm)


Please just remove anything that interferes with the scrolling at all.


Should be fixed now. There's nothing trying to interfere with the scrolling, just an overflow setting to hide the hamburger menu that seemed to cause the laggy scroll on iOS. (Hopefully I didn't make it worse - redeploying while being Hacker News #1 is always a bit scary.)


Much better now, ty!


Apple doesn't allow other browser engines than WebKit in its App Store, so Chrome on iOS uses WebKit, not Blink, and the problem is probably actually WebKit-specific.


Honestly, anyone who invests on the basis of what layer of cloud computing is currently being used to solve the problem can't be doing too well.


If you are willing to embrace the term 'cloud' - as everyone here seems willing to do, including the Cloudflare (...) guy - then you have already admitted that you are happy with that type of garbage. 'Edge' seems trivial in comparison. If you really care about this bullshit then you should agree to avoid all these VC/buzzword/corporate/marketing bullshit terms.


Unfortunately, you need to live with those terms if you want money, because people with money use them. I assume they don't even care if a given buzzword has any relation to reality whatsoever - all that matters is whether the market (read: other rich and wannabe-rich people) thinks that buzzword is hot or not.


> but it is not enough for cases requiring sub-millisecond latency and incessant availability: Fields such as IoT metric processing

Why would IoT metric processing be latency sensitive? Whether my thermostat processes the temperature in 3 milliseconds or 30 milliseconds seems immaterial.


True for private IoT, but industrial IoT is orders of magnitude bigger and comes with a host of use cases where machines operate at 100 to 1000 Hz cycles - and for stopping your 20,000 rpm milling machine or opening your pressure release valve, 30 milliseconds makes a lot of difference.
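(For scale: 20,000 rpm is roughly 333 revolutions per second, so in 30 milliseconds the spindle completes about 10 more revolutions before a remote stop command even arrives.)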


This seems like an exercise in quickly ascertaining what kind of audience you have. If you are talking with someone who needs to feel like they are buying "the edge" or whatever, adjust your sales pitch. Also, if you start off with a buzzwordy pitch and realize you're speaking with an engineer, switch to a more technical approach.


5G networks are taking a heavy bet on the future of edge computing.

5G's secret weapon against other wireless technologies is reliable ultra-low latency (as low as 1 ms, at a low failure rate). That's enough for network automation in factories and for factory robotics.

For consumer applications it would make sense to put graphics processors that can do VR, AR, games and machine learning inference very close to customers. For mobile users it's cheaper and more practical to get computing power as a service than to carry it around.


For the record, according to this link https://aws.amazon.com/about-aws/global-infrastructure/ AWS operates in 18 geographies (called regions)... These are not data centers. There are 55 availability zones (AZs) across AWS regions, and each AZ has at least 1 data center. 12 more AZs in 4 more regions have also been announced.


Also, there are over 100 AWS CloudFront POPS:

https://aws.amazon.com/cloudfront/details/#infrastructure

And you can do compute in CloudFront POPS with Lambda@Edge:

https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.htm...


Only Node.js is available in Lambda@Edge.


So annoying when people try to be fancy and break the default scrolling behavior. That's one way to get me to not read your blog.


The author has commented and it seems this is unintentional.


Thanks :-) - should be fixed now.


Glorious!


While this is an interesting discussion, it doesn't explicitly make the point that there's typically a tradeoff between network latency and processing speed. Even if I'm using some server with 500 msec network rtt, I come out ahead if doing whatever locally would take over 500 msec longer.
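As a back-of-the-envelope sketch of that tradeoff (function name and numbers invented for illustration):

  def should_offload(local_ms: float, remote_ms: float, rtt_ms: float) -> bool:
      """Offloading wins when remote compute plus the round trip beats local compute."""
      return remote_ms + rtt_ms < local_ms

  # e.g. a 500 ms RTT still pays off if the server finishes the job 600 ms faster
  print(should_offload(local_ms=2000, remote_ms=1400, rtt_ms=500))  # True

In practice you'd also weigh the bandwidth for shipping the input data, variance in the RTT, and what happens when the link drops.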


This is a strange use of the word "edge". Especially because, as others here have noted, it doesn't include a clear indication of the "core", aside from maybe large datacenters.

The older usages of "edge" are from the networking world, where infrastructure and technologies have a relatively sharp distinction between "core/backbone/trunk networking" (large trunk lines, intercontinental fiber links, etc.) and "edge networking" (last-mile telco infrastructure, routers handling one building or campus of a company, etc)

From this networking perspective, both the end-users local device and the centrally-hosted server are at the edge, though AWS will have more reliable and lower-latency connections to the internet core than your cell phone.


I think in most cases, "on the edge" means some device on site, where the IoT sensors or small collection machines are installed, as opposed to "on the cloud", which means on servers hosted by Amazon/Azure etc.


I presume that nobody sane is sending critical self driving car decisions to the cloud.


It was only used as an example of something you CANNOT do with such high latency. It was not implying some company out there is actually using a high latency server for real time processing


It's a pretty bad example of that, though. There are enormous problems involved in the idea of remote piloting a car, but a 150ms feedback loop is no big deal. It's slightly better than a human.

Especially if you look at the latency of one-server-per-continent, which is still extremely far from "edge": you get something like 50ms or better. That's easily good enough to control anything a human could control by hand.


Pretty sure there are systems that humans can control but that one would struggle to regulate with a 20 Hz loop, at least with simple methods like a linearized PID. For example, an inverted pendulum.


It seemed to be implying someone might want to, if they had lower latency. And the issue is how can you guarantee sufficiently low latency 100% of the time? You need a fallback, and if you have a fallback, why do you need such low latency?


I deeply hope it's a buzzword prank so that non-tech people and managers will start demanding "edgy software" and "edgy computing" and looking to hire "rockstar edgelord".


At the very least one hopes that managers will start googling for "edging videos".


The edge, as used most often in my experience, is the entry point that connects the public and private internet. It rarely has anything to do with location (in almost all cases it's implied the edge is regionalized to be the closest data center to the target).

For AWS: a Lambda at the edge, for example, may be a Lambda behind an API Gateway that acts as the public entrypoint into infrastructure behind a VPC. That Lambda is at the edge.


On AWS that’s a somewhat confusing way to put it since there is Lambda@Edge, which allows you to run Lambdas to handle/manipulate CloudFront CDN requests at their edge locations.


I tried to search for "ETSI MEC" on Hacker News and got no results.

https://portal.etsi.org/Portals/0/TBpages/MEC/Docs/Mobile-ed...

It is one of the definitions of edge computing, any comments?


For all the effort we've put into moving spreadsheets and financial data to the cloud, there's just no replacing the immediate feedback of a local Excel app when doing What-If-Analysis, Goal Seeking... or a dozen other financial tasks.

Keeping "the edge" in sync with a centralized system-of-record via intelligent Excel add-ins is beginning to look like our big bet for 2019 (and beyond?)


A gem on this topic: "On the Design of Display Processors" (The Wheel of Reincarnation) http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...


Does anyone know what visualization tool the author used to create the AWS architecture diagram? It looks really cool.


Likely Arcentry. Another company in the space is https://cloudcraft.co/



What's kinda mind blowing is that this is an actual business. I can't imagine why someone would need this specifically for their cloud presentations. Like isn't visio enough for everything?


Wow! So the original story is written on the blog of a company that makes charts of cloud architectures (and is probably on the edge)? I'm amazed by how niche that business is!

Also, a very nice plug to put your own diagrams into the story.


Right, I didn't even make the connection until the OP asked the question here.


It has been fifteen years since I had access to a machine with Visio installed, and my memories from back then aren't good memories. This seems more like a descendant of GraphViz than anything to do with Visio.


I personally don't use it, but I know nearly everyone in my company does. (F500, >300K employees)


What does "open-source" refer to in "Create beautiful cloud and open-source diagrams"?


As in it offers components that are unrelated to any cloud provider. Postgres, Nginx, Kafka etc.


https://en.wikipedia.org/wiki/Bleeding_edge_technology

Not sure why everyone assumes the question was regarding networking.


Why the hell would an investor care about some technical performance nuance that could easily be implemented via setting up some Cloudfront distributions?

Did she ask about your cache headers as well?


Is it really a self-driving car if it needs a network connection?


An article about the edge and no mention of Akamai. Last I heard even Amazon Prime Video was being streamed on Akamai, not on CloudFront.


Did the terminology come from "edge routers"? Those are simply the last router before the device which uses the network.


Wouldn’t a PHP app with app servers in many points of presence, connected to Azure Cosmos DB, essentially feel and act like "the edge" to a user?

Nuxt.js and Next.js are "the edge" apps. But the server being centrally located, especially for global e-commerce, needs to go the way of the dodo.

Shopify being one example. Some Plus clients pay near $800,000+ a year, yet there are still no database or app servers outside the US.


Would be interested in how the talk with his investor continued...


guitar player from U2


Came here for this. Leaving satisfied.

Also, I can appreciate how some might feel like we are in a "fat client" world again and in some cases I am sure we are - it will depend on who is wielding the hammer as to whether it is used "properly" or the "most efficiently". I love how the modern cloud allows us to deploy computational power to the source of the data - if we so choose/need to.


FWIW, the Edge is a vehicle made by F∗∗∗.


Needs more edge-lord.


I too enjoy thick butts.


Investor makes business decisions based on marketing fluff, more at 11.


[flagged]


I remember hearing about something similar to this some time ago.

What happens during heat waves? Just wondering.



