So, I saw something similar in semiconductor manufacturing in the run-up to Y2K. Lots of equipment needed replacing but normally couldn't be replaced, because upper management didn't want to pay to replace old equipment, preferring to spend only on shiny new fabs. However, Y2K was something that shareholders cared about. So equipment vendors realized that if they said, "no, that old equipment cannot be made Y2K compliant, but we can sell you an upgrade to it that is", then the engineers at old fabs could finally get some new equipment. The engineers were happy to get newer equipment occasionally, the vendors were happy to sell new equipment (or at least more substantial upgrades) to old fabs as well as new ones, and even management was ok with it, because no one got fired for doing Y2K upgrades. It all worked well until Y2K actually came and went, and then you couldn't use that trick to get old equipment upgraded anymore.
With a couple of bad metaphors you can prove almost anything. But anyway, the point is blockchains (I guess).
>Consumers actually like transactions to be reversible, within reason; markets work better that way. Companies even like to be able to safely unwind legal agreements sometimes when it turns out those contracts weren't the best idea.
Reversing financial transactions or contractual agreements in a way that eradicates all traces of their existence is almost never what you want and it is almost never legal.
If there is one truly append-only world out there then it is finance and legal. I'm sure there are some exceptions, but it makes little sense to build an argument on those.
I do agree though that blockchains seem like a rather contrived solution to some of the problems they are supposed to solve.
I still struggle to think of a blockchain application in the logistics and supply chain domain that goes beyond trusted digital customs docs and would actually work. Maybe I lack imagination, but sometimes it reminds me of the RFID hype back in the day, except that blockchain, being digital rather than a physical RFID chip, attracts a lot more buzzword hunters.
> We can do consensus in many (much cheaper) ways.
Agreed, but that's why we're working on proof of stake systems. With BFT algorithms, consensus can also be much faster, on the order of 1 second.
> Most people don't want their transactions or legal agreements published to the world. [...] And [companies] rarely want the public to know about their contracts, let alone their inventory details.
Also agreed, but that's why we're working on systems with better privacy, using tools like ring signatures (Monero), zk-SNARKs (ZeroCash), Bulletproofs (future Monero?), and zk-STARKs.
Private smart contracts are tough, especially if random access is needed, but it's doable with a scheme like TinyRAM.
I'm not disagreeing, but there's a case for simplistic config via JavaScript object literals in native JavaScript apps (node.js and browser apps). Of course, once you confine yourself to the JSON subset of JavaScript object literals, you'll soon find yourself wanting commenting syntax and other things. What I don't get is people using JSON religiously in apps that are neither using JavaScript, nor provide browser-facing web services. For example, in Java you'd need to do the exact same things for JSON serialization that you'd be doing for XML serialization (eg. jackson/jackson-xml or JAXB annotations on POJOs).
The case for XML as service payload IMHO is when you need an upfront design with schemas and all for multi-party or vertical data exchange (such as for payment providers to stick to TFA's topic).
The real difference between XML and JSON is that XML is not directly serializable to a data structure because it's too "rich", with both tag attributes and values. For serializing data, DOM, XPath, etc. are really not the right fit.
I agree. SGML/XML was invented for representing and editing semistructured text, not as a generic data serialization format. In markup text, attributes are for data that isn't to be rendered itself, but that specifies how rendering should be performed and other things (how CSS fans thought it would be a good idea to make up an entire new ad-hoc language for this purpose, while proclaiming "HTML is for content, not presentation", is another discussion). But the NoXML/JSON folks haven't really come up with a widely used alternative to eg. XML Schema (which isn't surprising, since JavaScript isn't statically typed). The majority of the work where standardization of message payloads is useful (such as for parties in e-Commerce) has already been carried out. The world hasn't and couldn't have waited for JSON to become a viable alternative. Besides, what makes JSON so practical - that it is schema-less - is exactly the reason it isn't a good fit for describing upfront protocols for long-term multi-party data exchange.
> JSON & CSV, whilst undoubtedly simpler formats, still lack the 'power' of the XML ecosystem.
"Power" is often a compromise. There are convincing arguments that plain text is, after all, more powerful than JSON, CSV and XML. It all depends on the context where this power will be exerted.
If you only need to store name-value pairs, it may be better (e.g., more robust, more powerful) to use
scanf("%63[^=]=%d", name, &value);
than to depend on a particular json or xml implementation.
Admittedly, I'm not really getting the main argument (that blockchain is like XML in that it's overused, yet this is a good thing because it will bring collateral progress?), but the part about the broad misunderstanding of HTML, SGML, and XML that has been ongoing since 2009 (and actually 1998) is still worth reading.
One important difference is that XML was never anything like the "get rich quick" scheme that Blockchain and Bitcoin are today, so it didn't attract such a huge number of charlatans and scammers. It was more of a "get shit done quick" scheme, but it never attracted scum like Brock Pierce and his ilk, the way Bitcoin does.
1. Blockchain is being touted as a panacea, a solution to every problem, not just every data integration problem. This has led to all sorts of companies offering solutions like retirement homes on the blockchain, massage chairs on the blockchain, etc. etc. The scope of the claimed magical properties of XML was never anywhere near as grandiose. I suspect this may lead to a stronger backlash against blockchain.
2. Blockchain is tainted, rightly or wrongly, by its association with cryptocurrencies, and all the negative connotations that brings. XML never had any association with large scale fraudulent activities. I can't imagine that being helpful for continued large scale future investment.
As an aside, it seems some of the more serious formerly "blockchain" projects appear to be rebranding themselves as "distributed ledger technology" projects, perhaps to try to retain credibility.
If XHTML had won, you would not have had a bunch of web sites that wouldn't render. You would have had browsers shipping both the old HTML parsers and the XHTML parser, routing artisanal sites through the old HTML parser when they failed to parse as XHTML. Hell, you probably would have still had Hixie's unifying algorithm for parsing the old HTML consistently across browsers, because consistency would still have been a problem for the billions of sites that weren't XHTML.
If people had adopted Bitcoin instead of the old technology "X"-- for any "X" I can think of-- you'd end up with a vastly inferior product and a network that probably performs several orders of magnitude slower than how "X" currently performs. The only exceptions I can think of for "X" atm are funding Wikileaks and funding Scihub.
This this this. If anything bitcoin is the new HTML5, though even that analogy isn't great.
- it's hugely popular mostly on the basis of hype
- the technological advantages over predecessors are overblown
The analogy isn't great, though, because HTML5 largely succeeded by scaremongering about the flaws of its predecessor, whereas bitcoin's success rests on wild claims about its appropriateness as a technical solution in so many applications. In reality, the two histories aren't comparable.
The html aspects of html5 have been hugely overblown, but css3 and the totality of javascript APIs that have sprouted up under the html5 banner have been revolutionary.
“Works best in IE” has been replaced with either “works best in Google Chrome” or “works best when you allow us to assault your computer with every tracking script under the Sun”, usually represented by a blank page when JS isn’t enabled, and especially notable when you were probably only expecting text and images.
The revolution has been glorious. Can’t wait for the next one.
I agree with the author's prediction; however, I propose another solution: Plan9. Its devs foresaw today's problems and started solving them, far better than the current partial solutions do.
Oh, of course, Plan9 implies user independence and sovereignty (bad things for today's "industry"), so the Plan9 solution rests buried in the past.
Plan9 sure is a marvel of engineering because all interactions amount to reading/writing a specific file, but it doesn't solve the issue that is interoperability: it merely defines a way to connect to the remote service, but it doesn't say what the functional interface is.
- what "files" should I read from or write to ?
- what format should I use ?
- when should I interact with said files ? Is it ok to tail -f it or should I reopen it every time I want to get some new information ?
XML in itself didn't solve anything; it's the standards for how to define a grammar and validate it that made all the interoperability possible.
IMVHO there is an unnoticed problem underneath, probably best explained by the XKCD standards comic (927): we will NEVER really accept a "proposed solution" for long, but we can naturally converge on a common solution if we find a comfortable common way to move.
Plan9 for me is an answer because, yes, it does not solve the problems you rightly pointed out, but it offers a common ground that reduces them by its nature. It offers a dramatically simple and effective "networked" foundation, a good ground on top of which we can develop the future of IT and thus future interoperable solutions.
Email did the same when it started, which is when we really had a diversity/interoperability problem in communications: it offered a means to develop a common ground on top, instead of offering a "common solution to be adopted".
Every single product changes, but needs tend to remain the same; Greenspun's tenth rule of programming is perhaps another example that makes the same point, and better.
I hope I'm being clear; English is not my mother tongue, so expressing "philosophical" concepts in it is not easy for me...
Oh come on. Shoehorning EVERYTHING into a one dimensional array of characters is a terrible idea. That is why Bourne Shell is a piece of shit, and PowerShell rocks.
Windows PowerShell? That abominable piece of managerial-driven crap? Well... it rocks, in the sense of a rock falling on its users' genitals...
Sorry for being rude, but well... there are many aspects of ancient OSes that do not work well, mostly because time passes and no one really advances them. BUT if you want a good comparison, try provisioning Windows with PS and NixOS with NixOps; after a month, tell me the difference.
I do not understand what you are saying... Yes, I used GNU autotools in the past and I saw the enormous amount of text they produce, not much different from the amount of crap any RAD IDE produces alongside a project, but you are talking about PowerShell, a Windows IaC solution, not a software development tool to help portability. My response: compare it to another GNU/Linux IaC solution, NixOS/NixOps.
If you compare them you may notice:
- they are both built into their respective OSes
- they are both a language and a framework to configure/deploy/provision systems, locally and over the LAN
- they both have a DSL for that
Well, NixOS easily realizes a semi-unbreakable OS you can replicate anywhere; PS realizes a MONSTER that takes weeks just to learn, and even then you can't really replicate Windows: at most you can provision an already deployed Windows instance. I do not see how GNU Autotools fits into that comparison.
I can't really make out what you're trying to say, either.
I'm asking you if you think that file oriented shells like sh, csh or bash (or any Unix shell short of Perl, Python, Emacs or node.js) are reasonable scripting languages for plugging together software components, the way PowerShell is.
For example, PowerShell lets you pass around an array of filenames, and doesn't blow up if there are spaces in the file names the way the shell does. PowerShell enables software components to export and consume rich well defined APIs that take and return actual objects and data structures, not just strings of text that you then have to parse on your own.
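The word-splitting complaint is easy to demonstrate. A minimal sketch in bash (the file names below are made up for illustration):

```shell
#!/usr/bin/env bash
set -eu

dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/with space.txt"

# Fragile: unquoted expansion of a command substitution word-splits,
# so "with space.txt" becomes two separate arguments.
broken=$(ls "$dir")
set -- $broken
echo "naive split sees $# items"     # 3, not 2

# Robust: a bash array keeps each filename intact, spaces and all.
files=("$dir"/*.txt)
echo "array sees ${#files[@]} items" # 2

rm -rf "$dir"
```

Arrays and careful quoting can paper over the problem, but the safe idiom has to be remembered at every call site, whereas PowerShell's typed pipeline makes the safe behavior the default.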
Even though it would be extremely useful, parsing text and manipulating data structures is something Unix shells are absolutely terrible at: they have to call out to other weak programs like sed and awk to do even the most rudimentary parsing, pattern matching and string manipulation.
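To illustrate the fork-out habit (a sketch; the version string is made up), here is the classic sed pipeline next to the built-in parameter expansion that newer shells grew for the same trivial job:

```shell
#!/usr/bin/env bash
version="release-1.2.3"

# Classic idiom: fork a whole sed process to strip a prefix.
v1=$(echo "$version" | sed 's/^release-//')

# Built-in parameter expansion: same result, no extra process.
v2=${version#release-}

echo "$v1 $v2"   # prints "1.2.3 1.2.3"
```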
Historically and until recently, Unix shells weren't even able to add two numbers: you actually had to run another program called "expr" to do that, and forking another process and parsing its string output is ASTRONOMICALLY less efficient than executing an ADD instruction. Unix shell scripting is pathetically wasteful and inefficient, and a huge pain in the ass to program, read and maintain.
The following is an example involving boolean expressions ("|" is the or operator):
$ expr length "abcdef" "<" 5 "|" 15 - 4 ">" 8
1
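For contrast, the same kind of arithmetic in any modern POSIX shell happens in-process, with no fork and no string parsing (a minimal sketch):

```shell
#!/bin/sh
# Historical way: fork a whole expr process just to add two numbers.
sum_old=$(expr 2 + 3)

# Modern POSIX arithmetic expansion: evaluated by the shell itself.
sum_new=$((2 + 3))

echo "$sum_old $sum_new"   # prints "5 5"
```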
Everything is NOT a file. You shouldn't have to fork another process and parse a file to add two numbers, or put double quotes around your logical operators for that matter. Plan 9 got it dead wrong. And that's why nobody uses Plan 9, and why modern Unix isn't anything like it.
I get it that you're in a reflexive tizzy about how evil and corporate everything from Microsoft automatically is in your book. But I'm afraid you simply don't know what you're talking about.
Since you're not able to talk about PowerShell rationally, then what about Python or Emacs? They're real programming languages, not lame mish-mashes of haphazard syntax and inconsistent semantics like Unix shells, and they don't view everything as a one-dimensional file like Plan 9 insists. They have actual libraries with real APIs and honest to god objects and data structures, not just files. In Emacs Lisp, files actually have their own local variables, containing any Lisp objects or s-expressions.
And if you really continue to insist that everything should be represented as a one-dimensional file, then how do you explain ioctl and fcntl and setsockopt?
The official error code returned by ioctl sums up my point about Plan 9's fatal mistake: ENOTTY: NOT A TYPEWRITER.
This is wrong in almost every detail, and superficial where not wrong. But the underlying point, that we get useful things out of hype cycles even if they’re essentially side-effects is true. I feel like there’s more skepticism about blockchain than there was about XML, so maybe it won’t have the same sorts of impact. On the other hand, maybe there’ll be a blogger writing pseudo-history in 15 years arguing that we got Git’s versioning system because of blockchain, so that’s nice.
Sure. XML is not in any way an evolution of HTML. XML DTDs are there because one of the original requirements was backwards compatibility with SGML. You can’t (well, really shouldn’t) parse an XML document without the DTD if it has one, because, thanks to entities, parts of the document might be in the DTD. There wasn’t just one standards committee behind the whole W3C XML activity, there were many. Some of them did indeed go out of control, but it’s not fair to label them all that way.
JSON hasn't killed, and won't kill, XML. It's a simpler solution for most of the things XML ended up being used for, but it's pretty bad outside its comfort zone. Companies that use it aren't necessarily dinosaurs.
All that said, the author isn’t totally off base, just enough to annoy me on a Saturday morning :-).
The problem is that the author lives in his own naive little world and lacks any kind of imagination to understand what life might be like in other worlds.
I mean, really, you've got to be a special kind of clueless to think the problem XML solves is syntax. Oh yes, it was all about avoiding having to write more parsers. Except that makes zero sense, because each XML vocabulary does require writing custom code to process it or transform it, and that custom code is just another high-level parser.
The key problem that XML solves, BTW, is namespaces. It is a mark-up language, after all, that lets people assign unique names to data in a document. Anybody who actually works on systems that involve multiple enterprises grasps this quite quickly. (Unfortunately we don't all do Web CRUD.)
As for blockchains, the author is similarly misguided. Blockchains do not solve centralization. Systems like Bitcoin are centralization engines: they strongly incentivize centralization at virtually every level, on a planetary scale. The blockchain platforms that show the most promise in real-world use are actually permissioned blockchains that introduce a central authority in order to prevent the race-to-the-bottom centralization incentives of Bitcoin and Eth.
Ironically, the author is correct that blockchains and XML are linked. Permissioned blockchains like eg Corda [1] work so well because they lower transaction costs across business. Blockchains at this level solve the biggest problem there is in business, the Auditing Problem (sometimes called the "Trust Problem" but this is a bad name because the problem is that businesses can't and never will trust other businesses).
I could go on but this sort of reddit-style ranting about technologies that the author clearly doesn't understand isn't valuable or productive. There are interesting problems and costs that XML and blockchains introduce but you're not going to find that here. This is just a dead end.
Our parents built the internet; our generation turned it into something useful for the masses. I believe we are witnessing the same situation: our generation created the blockchain, but it will be the coming generations, and the progress of technology, that will find its utility.
The blockchain doesn't "belong" to our generation just like the internet didn't "belong" to our parent's generation even though they created it... #Blockchain #cryptocurrency
We have too many endless debates about whether it's good or bad, a scam, too energy intensive; too much skepticism, too many scars from believing in media-hyped utopias or dystopias only to see them fail. The coming generations will adopt it as if it's the most natural thing in the world and find ways to utilize it we never thought of, because they will be defining what has inherent value to them. If you don't believe that, you have never experienced how the art market functions, or never heard kids discuss the latest skin or emote in Fortnite...
Culture drives what we consider inherently valuable, not pure logic or academia or Adam Smith. The blockchain is made for digital culture and for what will be considered valuable (and thus, per definition, scarce) within that domain, for reasons still to be seen.
The blockchain is a protocol, and protocols are infrastructure. The primary value of the blockchain protocol is its ability to remember, which is what gives it its "physical/atomic" properties. It's not decentralization, as many believe, as that was already solved with the TCP/IP protocol.
So you can say it's like TCP/IP with a memory.
This also means that it's much harder to take things from the physical world and put them on the blockchain, as that link can still be compromised.
So instead, where it will shine is in things that are created in the digital space: Crypto Kitties, money, digital Pokemon cards, documentation, identity, marketplaces with digital assets, etc.
Everything that has inception in the digital world.
Yeah, blockchains, whatever. Cryptocurrencies are here to stay and if you think bitcoin is the xml of crypto, you’re going to look exponentially dumb in the coming years. Good luck with that.
>> Blockchains solve centralization, which will turn out not to be the problem.
Centralization is a huge problem. Centralization means that lots of smart, hard working people essentially have to wait in line (sometimes forever) to get access to capital in order to implement their ideas. Capital is so centralized that decision makers don't have the mental bandwidth to invest it effectively so most of it gets absorbed by useless institutions who do useless stuff.
From 1995 to 2002 I led biz dev for DataChannel's XML parser [ the first written, most widely licensed to other platforms eg MSFT, and likely most widely adopted ]. I coordinated our participation in XML & all the related web services standards bodies.
We got to participate in a lot of amazing projects, whether relatively simple open standards for DTDs that made it easy to interchange crisply [ eg MathML, many others ], or for industries with considerable interchange challenges [ eg SWIFT ]. These projects were typically fairly lightweight, and fortunately many simply 'worked' and succeeded. In this capacity XML seemed to "just work" and seemed to make the world a better place -- "victory!"
Of course you can't really discuss XML without discussing "Web Services," which is what many consider the "xml ecosystem."
Around 1996 Norbert Mikula, Mike Dierken, John Tigue, myself and probably a few other characters began riffing on various ideas of how XML + HTTP could be used. What if you could simply look up a signed DTD for how your data was supposed to be delivered to you? Look up a signed DTD for how to invoke its API / RPC calls? Hell, why not have a DNS-like directory of what services are available, whether on your intranet, or from vendors? That's where the original 'whitepages / yellowpages / greenpages' naming convention in the very earliest days of web services came from btw. Naive? Over-optimistic? Absolutely... But we didn't know any better, so why not. We started discussing it with lots of smart people in SGML and XML space, the concepts started turning into prototypes and it started to build momentum.
People who were interested fairly wisely said "a standards body should lead this stuff [ instead of some vendor ]," so the early work landed inside OASIS [ the original SGML standards gangsters ]. Their wisdom gleaned from long years building incredibly complex SGML doc systems spared the early XML services from many potential debacles btw. Momentum began to build, and super interesting things started being built.
As Fortune 500 IT, startups, middleware vendors and app servers became increasingly interested in tapping into this magic, a very interesting dynamic emerged: the platform players perceived all this interop and open standards as a serious competitive threat.
Microsoft and IBM in particular got very involved very quickly, and the once simple & elegant concepts quickly devolved into multiple competing standards that were a horrible, illogical, impractical mess. This happened in less than two years... Open-standards based web services essentially became an irrational choice vs just "staying with your existing vendor's stack..."
That glimmer of amazing potential magic was quickly and effectively killed. Am I saying XML & web services were perfect? Of course not. I am saying a thread of amazing potential was pulled, and then around 1999 it was cut. And it was sad to see first hand.
I hope GraphQL, JSON, the incredible variety of cloud services and other interesting bits tap into that same magic.
Blockchain... imho interesting, but I fail to see many use cases where business-side buyers [ aka budgets ] must have a blockchain-based solution. Looks like tech still searching for a need and product/market fit to me... I think long-term it offers many areas an interesting evolutionary step, but I don't see it as hugely revolutionary game-changing stuff.