Ultimately a clear majority of the UK population is in favor of leave. Support has only increased since the vote, as the remain campaign's scare stories have failed to come true.
Much of the remain support is concentrated in a handful of London constituencies, meaning that almost all English MPs represent a significant majority of leave supporters.
They have to vote accordingly or they won't be reselected at a future election.
In any case, if the government doesn't follow the will of the people, then the 17 million people who voted leave will ensure that we get a new government one way or another.
> Ultimately a clear majority of the UK population is in favor of leave. Support has only increased since the vote, as the remain campaign's scare stories have failed to come true.
The problem is that the leave campaign won by lying and playing on people's fears, and when UK voters had a chance to reflect, it turned out that was a stupid choice.
Cameron said he'd stay on as PM regardless of the outcome of the vote.
Cameron said he was on the fence and would choose how to campaign based on the results of his renegotiation. During the campaign he told the nation it would be morally wrong to leave the EU, so the fence-sitting was obviously a lie.
Even 4 days before the vote, Cameron was saying the UK could stay in a reformed EU, despite Juncker saying simultaneously that there were no further concessions on offer and despite the Remain campaign choosing not to mention Cameron's "renegotiation" because it had achieved so little.
Osborne: Brexit will make every household exactly £4,300 worse off. This figure was quickly dropped by the campaign because it was based on garbage calculations and focus groups showed nobody believed it (too high, too specific, no way to explain where it came from). Moreover, it came from the Treasury, which Osborne himself had slated as unfixably politically biased when he first came to government.
Osborne: the punishment budget, despite the fact that he must have known his best mate and political protector Cameron would quit rather than "do the hard shit", as he put it.
Turkey will never join the EU, despite the government's official position being that Turkey should join the EU.
The business uncertainty caused by an out vote will wreck the economy. Reality: all economic indicators not directly controlled by Mark Carney (i.e. devaluation of the currency) are doing fine.
There is no chance of an EU army. In reality an EU army was the very first thing the EU discussed after the Brexit vote.
etc etc etc. There were just tons of statements, already proven to be lies, thrown around by the pro-EU camp, mostly coming from top politicians like the PM and the Chancellor. The absolute blindness of EU supporters to this fact is remarkable.
o Leaving would give us 350 million pounds extra a week to improve healthcare
o We would have a strong currency
o The world would fall over itself to give "free trade deals" to the UK
o UK tourists would be able to go to the EU without a visa
o No businesses would leave the UK to avoid trade tariffs/customs hops
o The UK wouldn't lose the service "passporting" rights that allow banks, IT companies and service providers to provide services to the continent without extra taxes or hurdles
o More good and secure jobs would be created, and globalisation would be rolled back
However, the £350 million is almost certainly not going to happen, mainly because even if we managed to not pay a single pound more to the EU (that's a big if), that budget would be needed for the farming subsidies, science grants and weird legislative functions currently provided by the EU.
I'd also wager that globalisation is not going to be rolled back, especially looking at the three pricks "running" the negotiations.
The main ones were "I will trigger Article 50 the very next day" from David Cameron, and George Osborne's threatened "punishment budget", in which he tried to threaten the public into compliance by basically telling them he was going to take all their money.
There were also an awful lot of predictions of total, immediate economic armageddon. Up to and including telling us that the very fact we were having a vote was harming the economy massively.
What exactly are people using the Linux shell for?
It's fun, but given that you can't easily access Windows files and you can't use it to run things like the Visual C++ compiler, you can't easily use it to script Windows programs.
What else can you use Bash for? Using apt-get to install your favorite dev tools, libraries, platforms and languages, including node.js, Ruby, Python, gcc, Go, Rust, htop, Apache, MySQL, Jekyll, etc., and then building and running your code with little or no change vs. running on Linux natively.
You absolutely can access files from the Windows filesystems easily. The C: drive is mounted at /mnt/c by default.
The ability to launch native Windows programs from within WSL is being delivered as part of the next update and is available on Insider builds now.
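For example, here's a minimal sketch (assuming a default WSL setup, where the C: drive is auto-mounted at /mnt/c as described above) of a Python script run inside the WSL session that reads a file living on the Windows side:

```python
# Assumes a default WSL install: the Windows C: drive is auto-mounted
# at /mnt/c, so Windows files are reachable with ordinary paths.
from pathlib import Path

hosts = Path("/mnt/c/Windows/System32/drivers/etc/hosts")
print(hosts.read_text())   # prints the contents of the Windows hosts file
```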
Bitcoin seems to be going through an issue at the moment where many transactions are taking hours to complete due to a large backlog. This might be transient, but when I ask what happens if volume doubles, I just get attacked for asking such questions.
It seems bitcoin just doesn't scale very well.
So my question is: does ZCash?
How will it cope with current bitcoin volumes? Or 10 times? Or 1000?
People STILL confuse the construction of software with the construction of buildings. We can estimate fairly accurately how long it will take to build a building once we have reasonable plans for it. I can pretty accurately say that it will take about 4 minutes to build the software once I have the plans to build it. The compiler pretty much automates the whole job.
Writing software is NOT construction. Much of it isn't even design. Most of it is gathering detailed requirements and writing them down in unambiguous form (code).
Asking me how long it's going to take to write a piece of software is like asking a building contractor how long it will take to design every single detail of a city block, including gathering all the requirements.
Also, the requirements for software are much more detailed than for a building. 100,000 lines of code represent 100,000 decisions. I bet not many buildings involve 100,000 decisions. And even 10,000 lines, a tiny software project, still means 10,000 decisions.
The reason designing a building is faster is that there are fewer decisions. The reason there are fewer decisions is because buildings are way better understood than software.
A lot of designing the structure of a building is just the implementation of a few core concepts that have been perfected for thousands of years, like doors and windows and beams and arches.
There is sufficient human knowledge about buildings that people expect every single building to stand up, work properly, and be safe the very first time it is built. There aren't many self-taught building engineers who just picked it up in their spare time during high school.
And a lot of the design is simply choosing fixed options. Architecture and construction firms don't design and engineer the ceiling light fixtures or the faucets. They select a supplier + model, and install them with standard hardware in standard ways. In many cases there is even government code that tells them exactly how it must be done.
Compare to software. Every single decision is up for grabs on every new project. Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training. The state of the art in making sure things work is just brute-force, constant testing. Of course projects can run into problems.
Software, as an engineering discipline, still has a long way to go toward being a mature, well-understood human endeavor. Which makes sense, since we've been doing it about 1/100th as long as we've been making buildings.
> Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training.
I hold the unpopular opinion that we need those things. The current state of things is that we're not much more than kids hacking away on computers with minimal supervision.
I'd love to see most (if not all) software written with the rigor required to write safety-critical systems.
The problem, to add to the buildings metaphor, is that we don't have a few thousand years of experience building software under our collective belt yet. We quite literally don't know how to build good software. Or rather: we are still figuring out how to build good software, and we don't know yet how far along we are in this process. Therefore it looks like a questionable idea to set the things that we currently believe to be true in stone.
We're also never building the same software (or at least very rarely). The industry is such that we're always trying to invent something new, something that hasn't been done before.
If you build the same (or near same) piece of software 100 times, you can know almost exactly how long it'll take and you can do it quickly, same as if you were building 100 buildings. But we don't do that, because you build software once then just copy it 100 times.
You only build software if you're making something new that hasn't been made before.
> You only build software if you're making something new that hasn't been made before.
Or when what exists is proprietary. This is why we should choose copyleft licenses over permissive ones.
The project I am working on is a bit boring, outsourced (or exploited) here in my country. It's about ETL: transferring data from a prod db to an analytics db. I am of the opinion that all this is a solved problem and I am probably repeating mistakes. However, due to the nature of capitalism, exploitation ....
I've been there. I even got a fair way into discussing starting a 'data migration' startup. Customers only care about getting their data migrated and wouldn't mind you holding the rights to the software tools you write to do it so you could get better and better as you build up your toolkit.
You'd have to have it mandated to everyone. Otherwise the few conscientious teams would take 10 times as long as the risky ones. And since riskiness often doesn't bear deadly fruit for months or years, the careful teams would never stand a chance in the free and ignorant market.
>The state of the art in making sure things work is just brute-force, constant testing.
I'm pretty sure the state of the art is using good type systems and libraries that take advantage of them. That takes some of the brute force out of testing. Unfortunately, many places don't even use the full power of the types their chosen language has. Others choose PHP or C++, piling on technical debt, because they sell software the way HP sells printers: the initial payment is low, but you'll have to pay for support and bugfixes forever.
The only thing that can make software cheaper is the customers demanding better tools, languages and processes.
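As a hedged illustration of what leaning on the type system buys you (a Python sketch with mypy-style annotations; the ID types are invented for the example):

```python
# UserId and OrderId are made up for this example. Running a checker
# such as mypy over this file rejects the commented-out call outright,
# so no test case has to stumble onto the mix-up at runtime.
from typing import NewType

UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

def cancel_order(order_id: OrderId) -> None:
    print(f"cancelling order {order_id}")

cancel_order(OrderId(1001))   # fine
# cancel_order(UserId(42))    # type checker error: UserId is not OrderId
```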
> The reason there are fewer decisions is because buildings are way better understood than software.
I think buildings are better understood in large part because they are not nearly as malleable. Because software is highly malleable by nature, the complexity and scope of decisions can grow much faster than in any other engineering discipline.
I'm convinced malleability of the software isn't the issue; the issue is that the thing that the software is modeling is malleable and generally unknown to the level of detail necessary.
Nobody in a business knows all the business rules, no manager is aware of all the data that their underlings create, manipulate or consume, no individual or single office at the FAA really knows all the rules of how air traffic control actually works, etc etc etc.
But when we create software to automate any of those things, we then need a full understanding of all of them. And then we generally discover that the rules we've uncovered are inconsistent, violate some other rules or laws, etc. And once those are fixed, then and only then do people realize that what they had isn't quite what they wanted. And not only does what they want change, the whole underlying system was evolving at the same time, so it sometimes feels like you're starting again instead of tweaking the implementation (and this may also be why software methodology research focuses obsessively on approaches that make things easy to change at a later date).
When someone builds a multistory building, they will build the first floor to support the second floor. And in general it is exceedingly obvious why that is the case.
The problem with software is the abstraction. The first floor and the second floor aren't connected in any physical fashion. It's easy to rip out the first floor after the second is built, and only later realize you cannot support the necessary load.
Buildings have a far smaller state space, and that space is highly decomposable. There are only so many doors in the building that can be open or closed, lights that can be on or off, HVAC zones that can be on or off, or elevators that can be going up or down or stopped at a floor, etc. And few of those states interact.
Software systems have astronomical numbers of states, and while most of the discipline of software engineering is about how to minimize unintended interactions between them while still producing the intended interactions, we still wind up with lots of the unintended kind.
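As a back-of-envelope sketch of that gap (the counts below are invented for scale, not measured):

```python
# A building with ~300 independent on/off elements (doors, lights,
# HVAC zones) vs. a program with just 1 KB of mutable state. The
# building's states barely interact; the program's 8192 bits all can.
building_states = 2 ** 300
program_states = 2 ** (1024 * 8)

print(len(str(building_states)))   # about 91 digits
print(len(str(program_states)))    # about 2467 digits
```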
There's a lot more to the state of a building than its human interfaces. When the ambient temperature changes, the state of the building changes. When the wind blows, the state of the building changes. When it rains, the state of the building changes. As materials age, the state of the building changes.
We're just so much more familiar with these forces and states that we can reliably model and design for them, and then (as with your comment) not worry about them anymore.
We take it for granted that our buildings won't fall down in a storm. But the knowledge of how to do that had to be developed and standardized at some point.
For buildings not collapsing, the main thing that matters is that the structure's strength is larger than the applied forces. Edge cases can be solved by adding more material, or more intelligently by having enough redundancy that you retain enough strength even if a small number of components fail.
This, of course, does not apply to software functionality - you can't fix bugs by "more CPU power". However, if you look in the places where you can apply this methodology - like cloud services - you find that they are indeed very reliable.
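The redundancy arithmetic behind that reliability is simple enough to sketch (assuming independent failures, which real systems only approximate):

```python
# Availability of n redundant replicas, each independently up with
# probability p: the system is down only if all n fail at once.
def availability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(availability(0.99, 1))   # 0.99, a single machine
print(availability(0.99, 3))   # ~0.999999, three redundant machines
```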
If the materials of software are for-loops or text files, I think one can say that "we" are familiar with them.
A simple program, or a simple piece of a large one, is something one can be very familiar with.
Indeed, the particular parts are more predictable than the particular parts of buildings, whose behavior changes over time, which wear out literally rather than metaphorically, which have to fulfill a number of functions simultaneously, etc.
So I think it is ultimately a matter of the state-space of the ingredients rather than a lack of familiarity with the materials.
I am not sure malleability is really the core issue... the core issue is that there are many different approaches to building a feature, with different trade-offs and costs that are not readily apparent at the outset. Not to mention that poor design up front causes a plethora of issues down the line if you need to scale.
This reminds me of a metaphor: "If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge."
I think you're right that thousands of years of experience play a part. But, overall, that metaphor has had me thinking a bit about how much less predictive building software can be compared to engineering and wondering why that's the case.
Assuming building software and traditional engineering are about as complex, and assuming that engineering is easier to predict (construction deadlines slip too), I'm curious whether we can overcome fundamental issues like the halting problem to become as predictive.
> If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge
For most of human history, we basically did this. Bridges have only become very reliable in the past 100 years or so. Before that, bridges collapsed very regularly and people were very wary about going over a newly built bridge.
I think the reason designing a building is faster is peoples' standards are lower.
Almost no one wants a building designed uniquely for their lifestyle. They don't even realize you could ask for such a thing. They just pick and choose from what they've already seen.
If that were true of software, it would be just as simple. But people keep asking for things no one has ever done before, exactly, and that leads to unpredictability. We keep seeing unique new software, so we are more likely to ask for the same.
The same is true for buildings when the architect is trying to do something new. Buildings could be just as interesting as software, but most people don't think to ask.
I think in the long term, buildings will be exactly as custom and complicated as software, and designing them will be just as difficult to estimate.
A lot of people hire architects precisely because they want a building designed for their lifestyle. No one goes to an architect and says "I'd like four walls, some rooms - I don't care what they do - and a roof. Can you do that?"
And the architect never says "Maybe. I wish I could be more specific, but it's just hard, you know?"
There's very little genuinely new in software. Even outside of the CRUD treadmill and corporate Java land, there isn't much of a leap between a Visual Basic application and an iPhone app. There are implementation and platform details, and lots of them. But the core concepts are recognisably similar.
The only real difference is that the tools keep changing - often for no good reason.
In architecture, stone is stone and concrete is concrete. In software, C++11 is not C++17, except for the bits that are, mostly, assuming you can find a toolchain that implements the differences properly.
Angular 1.0 is not Angular 2.0. Metal is not OpenGL, even though sometimes it smells like it. React is not jQuery is not a long list of other things, including Haskell, although you can bet someone somewhere is working on Category Theory as the definitive industry-changing conceptual model for MVC on web pages.
Most of the productivity costs associated with the constant churn are self-inflicted - the result of an industry more motivated by ADHD than by empirical analysis of which language and toolchain features make a real difference to getting shit done, and which are just unthinking tradition, random opinion, and noise.
From what I've been told, it's very rare for an architect to find a client who will let them actually creatively design a space for utility. Clients are primarily interested in appearance, surfaces, size, and to some extent layout. Very few will pay an architect to prototype new concepts, custom design amenities, etc.
Not that that's a good idea for most people... it's better for resale if you make a cookie cutter house. Most codebases will never be sold. Almost every house will.
If you had to resell codebases they'd probably be a lot more standardized.
That isn't the majority of people, though. Most of humanity lives in either apartments, cookie cutter developments, decades old homes they didn't build, or shanties. They don't get to choose, and customization is a luxury.
My brother builds large buildings and after a lot of discussion about our professions the key difference we agreed on was the level of constraint. Software is (appears?) unconstrained. The enormous cost of experiment or change is very much apparent to stakeholders in his projects. In mine there is always a sense that because it is intangible it doesn't have the same cost.
There is one consequence of considering code as software design that completely overwhelms all others. It is so important and so obvious that it is a total blind spot for most software organizations. This is the fact that software is cheap to build. It does not qualify as inexpensive; it is so cheap it is almost free. If source code is a software design, then actually building software is done by compilers and linkers.
But then you could argue that the machine code emitted from the compiler is a design and the actual hardware that implements it is "building the software".
But then you could argue that the implementing transistor arrangement is a design and the actual movement of electrons that implements it is "building the software".
I would bet that those involved in making buildings would differ.
However, sometimes you get to make more or less the exact same building again, which you shouldn't ever be doing in software. This is how suburbs happen; it's way cheaper to rebuild the same house again and again. In software, especially in an era of open source, you should not be doing that. So every house you make is the first house you've made of that type.
In software it's basically free if you want exactly the same building. Bits are cheap. Even physical media to store the bits is cheap.
Sure, the bricks, mortar, framing lumber, concrete, drywall, conduit and such for a building cost a fair amount of money. Then there's the labor to actually put it together. But one of the big costs is the architecture and (literal) groundwork.
The fixed costs of a building are fairly steep, but minuscule compared to redesigning it over and over. In software we generally get away with much cheaper fixed costs of deployment, but redesign is still redesign.
We're typically happy to spend some time redesigning and paying that cost because we're not actually demolishing and rebuilding parts of the project with high deployment costs like renovating an existing building. Throwing away the old copy and deploying a new one is essentially free. So we shift the budget to more redesign. There should be a limit on how often, though, if we ever want to move on to different problems.
Several years ago I wrote a blog post expressing a related point, contrasting software design to construction. Here's a part:
Let me give an example: if you're designing a bridge, you can draw blueprints on paper showing girders. The girders are described by giving their dimensions (accurate to 1/16th of an inch, say) and the particular alloy each is made from. This is sufficient to accurately model how each girder will behave under all kinds of different stress loads, which is important for ensuring the bridge will be safe, and also to model how the girders fit together like a puzzle, which is important for allowing the steelworkers to build the bridge correctly, on time, and on budget.
The key to all of this is the fact that you don't need to create a real girder in order to test the design and make sure it's correct. A few easily described properties of the girder are sufficient; it doesn't matter where every atom goes, it doesn't matter if the surface isn't perfectly uniform, it doesn't matter if there is some rust, etc. Lots of the details just don't matter at design time, and most of them don't matter at construction time either.
Software just doesn't work this way. Software development languages are extremely detail-sensitive: get one letter wrong, one punctuation character in the wrong place or left out, and the software won't work right. There is no way to accurately model something this sensitive to detail without building it first, and if you have to build it first you lose the biggest benefit of doing design up-front: the ability to test and iterate on your design cheaply before committing to a full build of it.
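To make the one-character point concrete, here is a sketch in Python: the program parses, runs, and produces a plausible-looking answer, and is still wrong.

```python
def average(xs):
    total = 0
    for x in xs:
        total =+ x   # one transposed character: this means total = (+x), not total += x
    return total / len(xs)

print(average([1, 2, 3]))  # prints 1.0, not the correct 2.0
```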
It's reasonable to imagine that installing 1000 windows in a building could take weeks or months.
If you're writing software and follow DRY, it might take a few hours to work out how to perform some repeatable task, but then only a brief moment to actually do it 1000 times.
The act of making software is all about decisions, there are almost by definition very few repeatable tasks. If you do find yourself repeating things over again, you're not taking full advantage of DRY or automation.
This is why I think a lot of software is so unpredictable in terms of time. How do you predict that which you don't already know how to do?
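A sketch of the 1000-windows contrast in Python (install_window is a stand-in for the worked-out task, of course):

```python
# The hard part is deciding once how to install a window;
# the repetition after that is essentially free.
def install_window(n: int) -> None:
    print(f"window {n} installed")

for n in range(1000):    # 1000 windows: a moment, not months
    install_window(n)
```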
Maybe not the best analogy, but there are also things like letting 10x the projected number of occupants enter the building to make sure it can still handle the load; if it can't, that can lead to some rather drastic changes requiring rework.
Also things like having a contingency plan if say city sewer gets backed up - how will your tenants take care of "business" then :)
Well, we still need you to provide a detailed task breakdown and hourly estimate for each task so if you can do that before starting work on your story backlog, that would be terrific.
Underestimated in this comparison is the fact that software development, even on very large projects, tends to be staffed by generalists, while building construction relies on many highly-specialized masters of a trade.
Software has some of that specialization (for instance, even big projects don't try to write new operating systems, interoperability protocols, or graphics libraries). But when it comes to the boundaries of what's considered part of the project, we rely on generalists. In building construction, a general contractor may have an HVAC subcontractor, one for electricity, one for glass, one for landscaping, one for every subsystem. In software, it's not economically feasible to contract in someone specializing in, say, web routes, and another in Rails model development, and another in for-loops, etc.
The other, related point: very often software development time isn't just a function of the requirements, it's a function of the intersection of the technical requirements/platform and the talent pool available for that platform/language/tech.
Another point in support of this is that buildings tend to get away with more minor bugs than software does. Walls usually have a subtle bow to them, toilets don't have to fail gracefully if the sewage network gets backed up, doors get installed the wrong direction, there are gaps behind the cabinets, light switches get wired around odd corners, etc.
So long as the building can stand up in an earthquake and looks good enough to sell, that's usually the end of it.
Software, on the other hand, has an air of imperative perfection. If it is even slightly wrong or ugly, it must be entirely wrong. I feel that way myself, and I must sometimes throttle my impulse to fix it indefinitely so that I can meet a business need in time.
In construction, the hardware part is pretty much standardized; very little new hardware is coming up. But in software development, hardware is still evolving very fast (processors, memory, buses, fibre, etc.), so software has to evolve along with the hardware. Once hardware is standardized the way CPU instruction sets are, software will also standardize without any further significant development.
Meanwhile, lately it seems like all too often, you'll be pounding nails with a hammer and then in between swings, it suddenly turns into paint brush or a screwdriver or an Allen wrench.
Or you're working along one day, and all of a sudden, every machine screw in every piece of power equipment on the job site just up and disappears into thin air, and all of that equipment shakes itself apart into a maelstrom of shrapnel, because that little hidden component nobody thought about winked out of existence.
Construction workers would not put up with this bullshit.
Yeah. Software projects that are as complex as a house are typically called "the script that guy wrote a decade ago", and tend to work fairly well, even considering that use-cases for software have changed rapidly in the last few decades.
Yeah sure.
But it's not a good solution, really. First of all, I don't really want to pay any more.
Secondly, if everyone has to do this in order to get their transactions processed in a reasonable time, then nobody is any better off.
That's why it works. Some people are willing to pay a lot more for some of their transactions, and some transactions just can't justify a high fee.
The block size is a security parameter. You increase it, you decrease security. And it's also more of an engineering limitation than most people realize at first appraisal.
We use fees because we don't have better ways to allow more transactions. And it's an active field of study, but most of the gains so far have been very small.
As others have said, "mining" is really not a good name for what is happening. What the miners are really doing is the work necessary to confirm and secure the transactions.
When they do that, if they are the first miner to come up with the values required to confirm the transactions, they are allowed to add a transaction of their own which "transfers" some bitcoins from "nowhere" to themselves, thus getting rewarded for their work. It is this that incentivizes people to "mine", but it's not the purpose.
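As a toy sketch of that search (not the real Bitcoin protocol or its actual difficulty rules, just the shape of the idea):

```python
# Toy proof-of-work: search for a nonce whose SHA-256 digest of the
# block data starts with enough zero hex digits. The first miner to
# find one gets to include the reward transaction described above.
import hashlib

def mine(block_data: bytes, prefix: str = "0000") -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce   # proof found; this miner claims the reward
        nonce += 1

print(mine(b"alice pays bob 1 BTC"))
```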
I've had transactions waiting 8 hours now for a confirmation, and some newer ones from this morning that have gone 3 hours without a single confirmation.
All of these carry the default fees added by the clients, which were sufficient last month.
I was trying to demonstrate bitcoin to some friends last night. They made wallets and I tried to transfer some coin to them. After an hour they were saying it would have been quicker to drive to the ATM and get cash. I doubt they'll look at bitcoin again now.
This needs fixing and this needs fixing quickly. Waiting half an hour for a few confirmations used to be a problem that needed fixing. Waiting hours will quickly kill bitcoin.