We eventually tracked this down to some of the stabilisation algorithms – they work fine within the intended time frames, but there's an inefficiency in them that gets worse the longer the wormhole is open and eventually they can't keep up with the shifts.
An allusion to the accumulated rounding bug that caused the Patriot missile incident in 1991? That system was likewise intended to operate for only short periods.
From https://autarkaw.wordpress.com/2008/06/02/round-off-errors-a...

In the Patriot missile system, time was stored in a 24-bit fixed point register. Since the system's internal clock ticks every one-tenth of a second, 1/10 had to be represented in that register; in 24-bit fixed point it is 0.0001100110011001100110011 (the exact stored value is 209715/2097152).

On the day of the mishap, the Patriot battery had been left on for 100 consecutive hours, causing an accumulated inaccuracy of 9.5e-8 × 10 × 60 × 60 × 100 ≈ 0.34 seconds.

The error of 0.342 seconds produced a calculated shift of 687 m in the range gate. A shift larger than 137 m meant the incoming Scud was not targeted, killing 28 Americans in their barracks in Saudi Arabia.
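The quoted numbers are easy to check; a small sketch reproducing the arithmetic, using the register value given above:

```python
# Reproduce the Patriot drift arithmetic quoted above.
stored = 209715 / 2097152     # 1/10 as chopped into the 24-bit register
error_per_tick = 0.1 - stored # ~9.5e-8 s of clock lost every 0.1 s tick
ticks = 10 * 60 * 60 * 100    # ten ticks per second, for 100 hours
drift = error_per_tick * ticks

print(error_per_tick)         # ~9.54e-8 seconds
print(drift)                  # ~0.343 seconds of accumulated clock drift
```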
But, more likely, it's just a play on the fact that a Stargate (when unaided by some hostile energy source or time-dilation field) simply shuts off after 38 minutes, with no real explanation ever given.
However, in Stargate the 38-minute limit isn't a computer limitation; it's a physical one. In the original show (i.e. SG-1), Carter said it is "impossible" due to physics for the gate to stay open longer, and they also established that with enough raw power (e.g. a black hole, an Ancient device, etc.) it could be kept open longer, or near-indefinitely; it just wasn't possible with the power a Stargate normally operates on.
Keep in mind they did exceed 38 minutes many times, with a different explanation each time[0], none of which was computer-related. It is also worth noting that Earth in the show built its own DHD, which presumably would have different computer limitations than the Ancient-built DHDs (as established when every DHD in the universe except theirs died due to the malware).
The basic conceit in writing this story is that when you only have access to an area of physics through poorly understood reverse engineered xenotech, a lot of arbitrary constraints imposed by the tech start to look like physical laws.
> In the original show (i.e. SG1) Carter said it is "impossible" due to physics for it to stay open longer
In a show like Stargate, "impossible" is really just another way of saying "It would require overcoming a series of increasingly unlikely and insurmountable problems for that to be possible." Which, of course, they then use for a plot device in later (or the same!) episodes as they blow past those increasingly unlikely and insurmountable problems. :)
> Carter said it is "impossible" due to physics for it to stay open longer
At the beginning of the story it's implied that things like that are why Carter would be upset. Because the physics they had thought they had learned from the stargate hardware turned out to be software limitations instead. It's not like humans are ever capable of creating a wormhole to test for themselves.
Later on, when Carter gives a lecture on wormhole physics at an Air Force cadet academy, the teenage protagonist of that episode reacts in the same way, and Carter gives an "I thought that way once, but then I saw enough compelling evidence to change my mind about what's possible" speech.
> It is also worth noting that earth in the show built their own DHD which presumably would have different computer limitations than the ancient built DHD
The gate itself has control crystals as you can dial out without a DHD if you just give it power and some elbow grease. The 38 minute cutoff is probably in that code.
They must have run out of code space in the gate and had to offload some of the code to the DHD. :D The correlative-update routine for the coordinate system is in the DHD, for example, which is why the Earth gate could originally only dial Abydos until they added that routine to their homebrew DHD.
I suspect in many instances it's down to the method doing the printing being "helpful". The page documents this behaviour for Python. I've never used C# specifically, but it seems to default to the "G" format[1], while the alternative "R" (round-trip) format shows the expected result. You can play around with it here: http://ideone.com/hxD96J
Printing the value is a somewhat misleading check here; a better test would be equality with the constant 0.3. Of course, the site is just a neat illustration, not a technical whitepaper.
Yes, I agree this page is misleading. It makes it look like C# is using rational numbers instead of floating point. In fact, the "G" format defaults to 15 digits of precision for doubles, which is few enough to avoid ugly strings in many simple situations, but not all.
The printing routine. Internally, none of them can store 0.3 exactly in a floating point variable, because it is impossible to represent in base-2 exponential notation (which is what IEEE 754 uses).
Some languages may optionally use an exact representation (e.g., Scheme or Python), but that's not the general case.
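Python makes the distinction easy to see: its repr prints the shortest digit string that round-trips, while formatting to 15 significant digits (roughly what C#'s default "G" reportedly does) hides the noise:

```python
x = 0.1 + 0.2
print(repr(x))            # 0.30000000000000004  (round-trip precision)
print(format(x, '.15g'))  # 0.3                  (15 significant digits)
print(x == 0.3)           # False -- the stored value really isn't 0.3
```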
Do you find that the message broker makes that implementation easier/better than just using a database table to store the incoming messages until they can be processed?
I've worked on enterprise products, off-the-shelf and in-house, which use message brokers, and every time I could not fathom how the extra complexity was justified. If the messages were just queued in a simple database table instead, the implementations would have been much easier to work with: easier setup and admin (it's just part of the existing database), easier to check and monitor (just select counts from the table), easier to inspect the pending messages and fix problematic messages. In all cases I get the feeling the architects were just looking for an excuse to use a new technology.
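For what it's worth, the table-as-queue approach described above can be sketched in very little code. This uses SQLite purely for self-containment (the `message_queue` table and function names are made up); a real multi-worker deployment would use the application's existing RDBMS and an atomic claim such as Postgres's SELECT ... FOR UPDATE SKIP LOCKED:

```python
import sqlite3
import time
import uuid

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""CREATE TABLE message_queue (
    id         TEXT PRIMARY KEY,
    body       TEXT NOT NULL,
    claimed_at REAL            -- NULL = still pending
)""")

def enqueue(body):
    conn.execute("INSERT INTO message_queue (id, body) VALUES (?, ?)",
                 (str(uuid.uuid4()), body))

def claim_one():
    # Claim the oldest pending message. (Single-process sketch: across
    # multiple workers this select-then-update needs to be atomic.)
    row = conn.execute(
        """SELECT id, body FROM message_queue
           WHERE claimed_at IS NULL ORDER BY rowid LIMIT 1""").fetchone()
    if row is not None:
        conn.execute("UPDATE message_queue SET claimed_at = ? WHERE id = ?",
                     (time.time(), row[0]))
    return row

def pending_count():
    # "Easier to check and monitor": it's just a SELECT.
    return conn.execute("SELECT COUNT(*) FROM message_queue "
                        "WHERE claimed_at IS NULL").fetchone()[0]

enqueue("quote-request")
enqueue("risk-update")
msg = claim_one()
print(msg[1], pending_count())   # quote-request 1
```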
I can only see high-load scenarios as a possible justification, where the volume of incoming message data is simply too much for the RDBMS.
A database table is just a storage medium. The programming model is what is important. Some tasks don't fit the imperative model very well, some tasks have transactions open for days, some tasks need correlation across hundreds of different integrations, some tasks need to happen in guaranteed windows after initiated, some tasks need to be parallelised across lots of systems. These are a few cases I can think of.
From a current-position perspective, we use MQs for quoting. Someone raises a quote; this is parallelised across 20 providers, all with different integration methods; transformed back to a common object model; inserted as a single transaction into a database table; handed over to another system as an integration task which calculates risk; posted back from that to our system; a risk model is applied; the data is updated; the user is notified. This is all plugged into a 2-million-line Java and C# jumble that evolved over 15 years from a C++/COM nightmare.
We rewrote it with full test coverage in a month. We can add a new provider in an hour.
Imperative/table integration. No thanks!
People who distrust this methodology should grab a copy of Enterprise Integration Patterns. There's a huge amount of wisdom in that book that puts Joe Bloggs's average dynamic-language task queue to shame.
Not to lean on the phrase, but enterprise software involves a hell of a lot of stuff people don't understand. This is probably 1% of it.
Many message brokers do use or support using a database table to store the messages.
What they tend to add is ease of scalability and failover, plus higher-level functionality such as some of what the author of the article complained about: automatically marking a message as available again if the worker that was processing it gives no liveness signal within X seconds, to deal with hung workers, and so on.
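That hung-worker re-delivery behaviour is straightforward to model on top of a plain table too. A sketch (SQLite only for self-containment; the 30-second timeout is an arbitrary stand-in for a broker's configured visibility window):

```python
import sqlite3
import time

VISIBILITY_TIMEOUT = 30.0  # seconds -- arbitrary illustrative value

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""CREATE TABLE queue (
    id         INTEGER PRIMARY KEY,
    body       TEXT NOT NULL,
    claimed_at REAL,               -- NULL = available
    done       INTEGER DEFAULT 0   -- 1 once the worker acknowledges
)""")

def claim(now):
    # A message is claimable if never claimed, or if its claim went stale
    # (the worker stopped signalling liveness within the timeout).
    row = conn.execute(
        """SELECT id, body FROM queue
           WHERE done = 0
             AND (claimed_at IS NULL OR claimed_at < ?)
           ORDER BY id LIMIT 1""",
        (now - VISIBILITY_TIMEOUT,)).fetchone()
    if row is not None:
        conn.execute("UPDATE queue SET claimed_at = ? WHERE id = ?",
                     (now, row[0]))
    return row

def ack(msg_id):
    conn.execute("UPDATE queue SET done = 1 WHERE id = ?", (msg_id,))

conn.execute("INSERT INTO queue (body) VALUES ('task-1')")
t0 = time.time()
first = claim(t0)        # worker A claims the message
stuck = claim(t0 + 1)    # worker B sees nothing: message still invisible
retry = claim(t0 + 60)   # A never acked; after the timeout B gets it
```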
I used it to build a LINQ provider for a legacy mainframe database system, abstracting away the incomprehensible table and column naming, weird date systems, EBCDIC, and other nastiness involved in querying that system using straight-up SQL.
One thing I found is that implementing a custom LINQ provider gives you a better understanding of how Entity Framework runs your queries. You get to see how queries get split up into the parts executed in the CLR versus the parts that get translated, and how expressions get rearranged and rewritten.
Internet Explorer doesn't automatically update because every company disables the automatic updates.
Developing for Internet Explorer 8 is my present-day reality too, but if IT departments were instead mandating Firefox 3.0 or Chrome 1.0 I'm not sure how much better it would be.
I don't get it. His first example is that since the string replace method takes a callback as its second argument, you can use it to run code. So if you happen to find an app that injects unescaped user input into Javascript, you could enter the string alert(1)".replace(/.+/,eval)// and have it run the alert.
But if the app is injecting unescaped user input into code, wouldn't a simple ";alert(1);// achieve the same thing? Why do you need to be tricky?
The examples seem to be based on the premise that web apps commonly forgo the simple escaping of quotes, and instead implement a complex parsing and analysis of user input as code to identify malicious statements, which can be defeated as long as you obfuscate enough.
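To make the comparison concrete, here's a sketch of the vulnerable templating both payloads assume, next to simple escaping (json.dumps used here as an illustration; note a production encoder must also handle things like a literal </script> in the input):

```python
import json

payload = 'alert(1)".replace(/.+/,eval)//'   # the article's fancy payload
simpler = '";alert(1);//'                    # ...but this works just as well

def naive(user):
    # Vulnerable: user input spliced straight into a JS string literal.
    return 'var name = "%s";' % user

def escaped(user):
    # json.dumps escapes the quote, so the input stays data, not code.
    return 'var name = %s;' % json.dumps(user)

print(naive(payload))    # var name = "alert(1)".replace(/.+/,eval)//";
print(escaped(payload))  # var name = "alert(1)\".replace(/.+/,eval)//";
```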
> The examples seem to be based on the premise that web apps commonly forgo the simple escaping of quotes, and instead implement a complex parsing and analysis of user input as code to identify malicious statements, which can be defeated as long as you obfuscate enough.
He gives an example of such a system.
The larger issue is that the premise behind the design of web browsers is itself flawed with respect to displaying user content. As long as it is acceptable to execute JavaScript that is embedded directly in a web page (such as within script tags or as the target of an HREF), and as long as you are also trying to display user-supplied content, security is going to be extremely fragile.
We are perpetually one careless oversight away from pwnage, and we are vulnerable to mistakes that might creep into our libraries as well as into our own code.
Why is it such a huge revelation that advertising is created to benefit a business?
You've got a beautifully produced and captivating video, which makes no secret about advertising something. People enjoy it and engage with it, but then suddenly catch themselves: "Oh no! What if this advertisement was created... for profit?"
Something created purely to give pleasure to people is one thing; that is an act of generosity and altruism. If it is done to manipulate, then it is something very different. PR and advertising are manipulation.
The worst thing is to see something like this, allow it to emotionally affect you, then discover at the end that it is a corporate manipulation. Dunno about others, but that makes me feel used.
>>The worst thing is to see something like this, allow it to emotionally affect you, then discover at the end that it is a corporate manipulation.
This is literally the entire advertising industry's purpose. If you think any single advertisement is not solely intended to trick you into equating good feelings with a particular brand, product, or person then you are very naive.
There is no such thing as advertisement that is made just "to give pleasure to people", even when it comes from Google.
To help others decode the acronyms, and to explain the situation...
In 2009, the Australian Labor Party, then in power as the federal government of Australia, initiated construction of Australia's National Broadband Network (NBN). The network is replacing copper phone lines across Australia with a new fibre-to-the-home (FTTH) network, also known as fibre to the premises (FTTP), meaning optical fibre will be run around the country, down every street, through every front yard, and into our houses (except in some regional areas, where satellite will be used).
Two days ago, a federal election was held. The Australian Labor Party was defeated by the Coalition, an alliance led by the other major political party in Australia - the Liberal Party. The Liberal Party's policy for the NBN is to build fibre to the node (FTTN) instead, meaning optical fibre will be run to cabinets on street corners, which will service small areas, but the existing copper lines will still be used for the "last mile".
The petition is to convince the Liberal Party, now in power, to stick with the original plan of FTTH.