
This is why I find PowerShell to be an answer to a question nobody asked. If bash/zsh isn't cutting it for you, drop into something like Python, Ruby or Perl to do the heavy lifting.

PowerShell will never accumulate a library as comprehensive as what any of those three have; each has had decades to accumulate packages of all kinds, and more are still being added.

It's odd, but not surprising given their history of "Not Invented Here", that Microsoft never even considered using Python or Ruby when they both have .Net implementations. Their enthusiasm for JavaScript wasn't sufficient for them to embrace that either.




[Disclaimer: I work for Microsoft.]

As far as I recall, the original impetus behind what became PowerShell (as handed down to us in a conference room by Ballmer himself, sometime around 2001) was to fill the gap between ops people who did enterprise administration manually with tools like MMC and engineers who automated enterprise administration with tools like C++/DCOM. The latter were necessary in a lot of cases, but they were expensive, and we needed to give the industry a way to do powerful automation without hiring a bunch of PhDs. So, yes, someone did ask for it - the IT industry.


I was part of the Hotmail team, which consisted mainly of Solaris admins trying to mass-administer a giant number of Windows servers. We kept pleading with the Windows team and upper management (forgot who it was at the time, Raikes?) to give us a powerful shell and ssh on Windows. We ended up licensing F-Secure's ssh daemon for Windows. We used Cygwin, too.


>forgot who it was at the time, Raikes?

It was Allchin when I was there... about that era.


> So, yes, someone did ask for it - the IT industry.

... who couldn't use the well-known mature shell tools since they were locked into a closed platform, and Windows' cmd.exe was comparatively very weak.


It wasn't cmd that was the problem (although it is weak); it was how Windows exposed APIs to allow automated solutions.

You had two options: VBScript (which is a glorified COM+ agent) or a "real" programming language like VB6/C# which mixed COM+ with Win32 API calls (which are too hard to use for most non-programmers).

Powershell within itself does improve CMD. But that isn't primarily how PS pushed Windows automation forward. Microsoft said that every major server feature and server function should work in Powershell, so engineers at Microsoft had to look at it (be it fixing their lacking COM+ interface, providing a new PS interface, or something else).

The net result isn't that CMD got replaced by something better. It is that we have a lot more APIs and ways of interacting with Windows features and services than we ever used to.

The ironic thing is that today you could write MS-DOS-style console applications that work in CMD and provide all of this new automation functionality, because of the Powershell push. Back before Powershell you'd have to hack it together using unsupported APIs, registry entries and the like.


> Powershell within itself does improve CMD.

Kind of depends on how you define "improved". I find PowerShell slow enough that I usually still use cmd unless I'm dealing with a PowerShell script.


If all this API effort had gone into a real cross-platform language (Python, Ruby, or even Perl, which was king in 2001), Windows would have flourished. They went for their traditional lock-in approach instead, so now they have to catch up.


Windows did flourish...

And are you suggesting that Perl would have been a better solution for a Windows interactive shell and automation tool?

Are you going for the more readable argument? Or the more strongly typed argument?


> Windows did flourish

Late reply, sorry. I don't think we're on the same page: Windows sold relatively OK, basically fighting the battle for the Enterprise sector to a draw, but in the meantime it lost the cloud war and was literally wiped out of several other segments (mobile, web, etc). But don't take it from me: if everything were hunky-dory, they wouldn't be busy steering like crazy towards open source, something that was unthinkable 15 years ago.

> are you suggesting that Perl would have been a better solution for a Windows interactive shell and automation tool?

From a mindshare/commercial perspective, yes it would have been (and I say this as a complete Perl-hater). By starting from scratch, MS then had to spend a decade evangelising their new solution, with mixed results. It would have been tremendously easier to just offer first-rate support for established sysadmin tools (ssh, perl/python etc) from day 1; but again, don't take it from me: the big new Windows Server features are Ubuntu-bundling and native ssh support. This would not be necessary, were Powershell and Windows dominant from a commercial perspective.


> Windows did flourish...

Sure. Just see all those buildings full of Windows servers running Facebook, Google, Twitter, Amazon...


It's moving the goal posts a bit to suddenly restrict this discussion to only servers, no?

There are plenty of Windows servers running plenty of businesses. They may not outnumber non-Windows servers, but I don't think "majority" has to be a prerequisite for "flourish".

There are also buildings full of Windows clients...


This discussion is about PowerShell, which is primarily intended to automate server management and has been ported to Linux, presumably, for the same purpose.


> This discussion is about PowerShell, which is primarily intended to automate server management

Many (me included) find PowerShell extremely useful for automating any sort of Windows management, client or server. So I disagree that the discussion should be implicitly assumed to be limited to Windows servers.


I was administering Windows computers at a university around the time of Windows 98, and most of my automation came from ActiveX, where I could register external libraries and script most OS interfaces with VB or JS. Powershell is just that plus lots of objects and types.


No. The well-known mature shell tools manage files and processes - neither of which is the focus when managing a Windows system. Windows revolves around services which you RPC to, and which store their state internally - attempting to faff with that state from outside the service is almost always both undocumented and likely to cause failure.

Powershell allowed those services to standardise their RPCs in such a way that sysadmins could call them easily. Alternatively, you'd have COM or a variety of other RPC mechanisms, depending on the service, often undocumented themselves as they were only ever intended to be used by a GUI packaged with the service.
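To make that concrete, here is a small sketch using the CIM cmdlets, one of those standardised surfaces (the cmdlet and class names are real; picking the Spooler service is just an arbitrary example):

    # Query a service's state through the standardised management layer
    # (WS-Man/CIM under the hood) instead of poking at its internals:
    Get-CimInstance -ClassName Win32_Service -Filter "Name='Spooler'" |
        Select-Object Name, State, StartMode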


It's been years since I used Windows APIs -- back then there was OLE Automation, which allowed dynamic, late-bound (call by name, really) RPC calls into COM from scripting languages. Has this been superseded by something else? Using IDL-defined COM interfaces from scripts sounds like something nobody would want to do.


Indeed, nobody would want to do that, which is why Powershell exists. Scripting VB or Javascript against COM or whatever else the software vendor decided to use was generally an awful experience, if the interface was even documented at all. Powershell replaces all of that for system administration purposes - you can even run Powershell commands easily from within any .Net application and get structured, object-oriented output, which is what at least all the Windows Server GUI stuff does now.

You're still stuck with OLE/COM for e.g. Excel automation, though, I think.
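PowerShell can at least drive those COM interfaces directly, though. A minimal sketch, assuming a machine with Excel installed ("Excel.Application" is the standard OLE Automation ProgID):

    # Create the Excel COM automation object and write one cell
    $excel = New-Object -ComObject Excel.Application
    $excel.Visible = $false
    $workbook = $excel.Workbooks.Add()
    $workbook.Worksheets.Item(1).Cells.Item(1, 1).Value2 = 'Hello from PowerShell'
    $workbook.SaveAs("$env:TEMP\demo.xlsx")
    $excel.Quit()
    # Release the COM reference so EXCEL.EXE can actually exit
    [System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel) | Out-Null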


I'm having to do this on a project right now, and I wholeheartedly agree. Nobody would want to do this.


Well, lots of Windows admins used Cygwin and the like to get a Unix-y working environment, but PowerShell offers a much more robust set of mechanisms for manipulating the underlying Windows operating system.


> who couldn't use the well-known mature shell tools since they were locked into a closed platform

Yeah, I remember the bad old days, when I'd download that Cygwin installer and Windows would just give me the old skull and crossbones and say "You're not allowed to install GNU tools, Sucka!" One time I jailbroke XP and got it installed, but then CMD was all "You can't run grep.exe and sed.exe under CMD, Sucka! Not allowed!" I was so sad that I couldn't use those tools on Windows, so sad.


> a way to do powerful automation without hiring a bunch of PhDs.

I remember writing Windows software with DCOM was difficult, but it was more on the painfulness side than a difficult intellectual endeavor. Sure you didn't need PhDs to automate things.

> So, yes, someone did ask for it - the IT industry.

The part of it stuck in Windows, that is. Vendor lock-in is a powerful thing.


Yeah, DCOM was a task of filling out the correct fields with the correct values, and trying again until you got it right. Anyone with passing C++ knowledge could do it through sheer brute force.

It's one of the things which made me never want to program on Windows again.


I ported a bunch of COM to Linux in the late 1990's. Not DCOM with the full RPC, but just in-process COM. I had .so shared libs with DllMain and DllCanUnloadNow in them, I had CoCreateInstance, I had a significant part of the registry API implemented in terms of a text file store (HKEY_CURRENT_USER being in the home directory, HKEY_LOCAL_MACHINE somewhere in /etc, ...) and such. IUnknown, QueryInterface, AddRef, Release, ...


Netscape did that with XPCOM [1]. They once thought it was a good idea - I guess for versioning of interfaces; nowadays Firefox has moved on from it.

[1] https://en.wikipedia.org/wiki/XPCOM


Haha, funny enough our company did this as well - we had lots of legacy C++ code using COM and wanted to port to Linux. It was not that hard after all.


That's nice, but with CORBA implementations already available - why?


To port some code as-is.


I read this a while back. It's an interview with the dude who wrote PS: https://www.petri.com/jeffrey-snover-powershell


His HN handle is JeffreySnover; he is commenting on this story at this moment.


Not "The IT Industry", surely only a subset of Windows systems managers.


That's actually a pretty substantial chunk of "the IT Industry" though. And if you're the people who make Windows, it's not surprising that it's a big enough chunk to do something about.


I personally find that the term "IT" has become a bit of a code word for "Enterprise Wintel", and that anyone working outside of the gravitational pull of Microsoft prefers to describe themselves as working in "tech".


I think that IT has become a code word for being a tech worker in a company, rather than a worker in a tech company.


All those sysadmins would have been perfectly happy with just a properly working unix like shell.


Yes. Like all those early car buyers who would have been perfectly happy with faster horses.


Both PS and bash are horses actually.


Unix like shell + Python/Perl/Ruby = car

That being said, I really don't mind that MS went the PowerShell route. It improved their ecosystem a lot, and I believe it was also one of the engines behind the more open-minded move towards open source... maybe someone at MS can chime in, but from talking to some MS engineers I feel like PS was always one of the strong projects pushing that forward.


false equivalency


No, actually it's quite apt.

PowerShell, or at least the concept behind PowerShell, is an improvement on/superset of the Unix shells -- it can do whatever the shells can do, plus stuff they cannot do, because they don't support typed entities.

Its problems are not technical or conceptual -- they are ecosystem-related and historical: lack of compatible tools, Windows-only, etc.
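A tiny illustration of what typed entities buy you at the prompt (a sketch; WorkingSet64 and CPU are real properties of the Process objects Get-Process emits):

    # Each result is a System.Diagnostics.Process object, so you filter and
    # sort on typed properties instead of cutting columns out of text:
    Get-Process |
        Where-Object { $_.WorkingSet64 -gt 100MB } |
        Sort-Object CPU -Descending |
        Select-Object -First 5 Name, Id, CPU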


Or to the point of the original parent: the industry considers powershell a solution in search of a problem.


Only those that disregard the inventions from Xerox and ETHZ.

Powershell ideas go back to how REPLs work in those environments.


And everybody remembers how Ford was forced to reintroduce a carriage with a horse a couple of decades later because nobody was really using those horseless carriages. Right?

https://blogs.windows.com/buildingapps/2016/03/30/run-bash-o...


I feel very similarly, but then I'm of the mind that if I can't apt-get it, I'm not interested, and I'll bet that "open source" doesn't mean to MS what it means to the open source world.

As for the NIH syndrome, one of my favorite Paul Graham essays is still "What Languages Fix" (http://www.paulgraham.com/fix.html) particularly for C#: the problem that C# was invented to solve is that Microsoft doesn't control Java.


C# incidentally solved many other shortcomings with Java. The languages are getting closer to parity, but for many years C# was pretty far ahead (if you only consider language features and not tooling/ecosystem). Between reified generics, better native interop, and many functional features, C# felt and feels more concise than idiomatic Java. And I've written plenty of both.

Disclaimer: MSFT employee expressing personal views


In what way was C#'s tooling not better?


Java seemed to have a lot more options for deployments and monitoring for a long time. More advanced 3rd party libraries in general. It had simply been around longer and thus had those tools available. Now that C# has been standard for a while almost all of those tools have a C# equivalent.

As a .NET dev I'd actually argue that the things available to C# now are actually better (especially Visual Studio), but that's certainly biased.


> ...things available to C# now are actually better (especially Visual Studio), but that's certainly biased.

You call Visual Studio good? Have you actually used it?

VS2015 is painfully slow, and yes, I've installed all of the updates and I have no plugins or such. Heck, even Eclipse feels zippy compared to it. Something as simple as finding the next text search match takes about a second. Changing to a different source code tab is also sluggish.

Context sensitive search (right click identifier -> Find All References) is useless for anything that is even a bit generic. Instead of actually analyzing context, it seems to simply grep all reachable files for a text string. Including SDK headers...

I use VS2015 only for C++, maybe that can have something to do with it. Maybe it works better for C#?

I also dislike how I have to log in to use an IDE. Once it cost me 30 minutes of working time: I couldn't log in to a goddamn IDE that had an expired license. That's a great way to make developers hate your product and ecosystem.


It is a lot better for C#. All the features work perfectly for .NET languages, but because of the nature of C++ it's a lot harder to get these kinds of things working well with it. Some answers about why are here: https://www.quora.com/Why-is-C++-intellisense-so-bad-in-Visu...

Eclipse can run better on a potato but it's also very simplistic. You should also be able to turn off most of the features you don't want in VS.


Well, heck, even Emacs with some LLVM wrappers can do a better job at C++ Intellisense than Visual Studio. Eclipse's CDT runs circles around it as well (so much for being very simplistic). And don't get me started on trying to use Intellisense on large codebases - such as Unreal Engine's source code. It is just completely unusable (30+ seconds before the completion pops up, often blocking your IDE (a total usability no-no) with a modal "Intellisense operation in progress ...").

Also anything using complex templates (= pretty much any C++ code these days) throws it for a loop - making it useless where it would be needed the most.

And don't even get me started on the ridiculous state of refactoring support for C++ - even the stupid identifier rename doesn't work reliably, and that is the only C++ refactoring feature VS has (and we had to wait until 2015 for it!).

Sorry, but the fact that something is hard doesn't mean it can't be done - especially when there are free tools which can do it, and orders of magnitude better. Even Visual Studio can do it - if you buy the Visual Assist plugin.

Even the C# tooling isn't that great - yes, it does work better (I often do combined C++/C# projects) than the C++ one, but that isn't a particularly high bar to clear. Ever tried the Java tooling in Eclipse? Heck, that editor and the various assist features almost write your code for you ...

Where Visual Studio is great is the additional tooling - integrated debugger and the various profiling functions. Eclipse can't really compete with that, even though most of the tools (gdb, gprof, etc.) exist and have a lot of features that even VS's debugger doesn't have (e.g. reverse debugging, record/replay, etc.). Sadly, the Eclipse front-end doesn't support all of them.

However, the Visual Studio's editor and the tooling associated with actually writing and managing code (i.e. the part a programmer spends most time in) is stuck straight in the late 90's - just look at how code navigation (jumping to functions, navigating between files, projects, etc.) works in the ancient Visual Studio 6 and 2015 - it is almost identical.

There is also zero integration for alternative build systems like CMake. That is important for writing cross-platform code, or even just for basic sanity on large projects - Microsoft's build configuration/project/solution system, with millions of settings buried in cluttered dialog boxes, is just horrid and error prone, especially with its myriad mutually incompatible options - set one option incorrectly on one library and your project starts to mysteriously crash. Good luck finding that - and then spending hours recompiling your project :( However, nothing else is really supported properly. (Yeah, I know that CMake generates VS solutions/projects, but there is no integration within VS for it - so you have to be careful how you add files, how you change build options, etc.)

The saddest thing is that many of these issues are pure usability problems where changing/correcting a stupid design decision made a long time ago would already reduce the programmer's frustration by maybe 80%: the modal, blocking Intellisense operation windows; the ergonomic disaster that is the build configuration dialogs; the decision to report a template compilation error on the deepest template in the expansion (i.e. somewhere in the STL header) instead of where it occurred in your code, forcing you to mentally parse several pages of error messages to find the line number; or just re-sorting the Intellisense matches to show you the project-relevant ones first, instead of the unrelated stuff from the frameworks/header files, so that you don't have to dig through it. I.e. common sense stuff.

C++ isn't going anywhere on Windows, no matter what MS and C# advocates say, and it is really sad that the official tooling is in such poor shape.

I could continue ranting about many other things that are regularly driving me up the wall at work (like the incredibly slow and memory hungry compiler, horrid and unreadable error messages, especially for templates, etc.), but that would be pointless. IDEs are a matter of strong preferences and opinions, but I do believe that with Visual Studio, especially their C++ tooling, Microsoft has missed the boat many moons ago. And isn't even trying to catch up as it seems - they rather focus on adding stuff like Node.js support (are any Node programmers using VS or even Windows??).


.. if this is you not getting started, I'm kind of frightened for when it does happen ;)


Ah, sorry for the rant.

However, it does piss me off when I see people uncritically praising VS as the best thing since sliced bread and flat out dismissing the alternatives without even looking at them - when VS is really a terrible IDE.

Unfortunately, this is something that is difficult to comprehend for someone who has never seen how it actually could work properly - either because they have never been exposed to the alternatives or because they have the VS-centric "workflow" (more like kludges working around VS silliness) ingrained so deeply that anything else will perturb them.


"Something as simple as finding next text search match takes about a second."

I'm not sure how our configurations differ, but on a fairly modern machine (say, a Surface Pro 4 i5) I've not had Visual Studio 2015 slow down, and editing and searching is as zippy as with Vim for me.

I've had the experience that some third party tools have bogged down the experience, though.


I mostly run it on a Xeon workstation with 32 GB RAM, SSD, Nvidia K5000, etc. I think you can call it a modern machine. Other software runs fast.

I don't have any third party software installed on top of VS2015.

Only unusual aspect is that I also have several other Visual Studio versions installed, like VS2008, VS2010, VS2012 and VS2013.

I also have many Windows SDK and DDK (driver devkit) versions.

But that shouldn't affect it, right?


I run it at work on Xeon with SSD and 16 GB. Project sizes might be different. I don't have experience how the performance would scale with a project with tens of millions of lines of code, for example. (seems to be fine for me below that).

It does exhibit strange behaviour (including becoming suddenly very tardy) that only goes away by deleting the local cache files (intellisense db and whatnot) so I'm not claiming it's technically perfect.


It is meaningless to make such comparisons unless you also compare codebase sizes. VS is perfectly zippy if you have a small project - but try to hit CTRL+SPACE to complete an Unreal Engine or OpenCV identifier! You will be sitting there for 30 seconds or longer while VS is busy parsing/searching files ...


There's a box you can tick that says no to all of that stuff, though I don't know if you can access it after the first run. (I never have to log in when I run Visual Studio.)


No, at least in the version I have, the license expires quickly.

If I don't enter the credentials, it won't be able to renew the license, so it's not possible to use it at all. Regardless of any checkbox ticks.


The latest update to VS2015 fixes a lot of the performance issues for C#.


Meh, to make Visual Studio do IDE-like things, people install things like ReSharper, built by the same people who build IntelliJ IDEA. I've worked with both, and IntelliJ is much smarter and supports way more technologies. And yes, it's built in Java and can be sluggish, but then again I'm using it on both Linux and OS X, and it runs just fine on Windows too, so it doesn't tie me to an OS I don't want.

Also, I keep mentioning this, but .NET lacks a build tool like Maven or something similar (SBT, Gradle, Leiningen). .NET is still at the level of Ant / Make and .NET devs don't realize what a huge difference that is.


>people install things like ReSharper

This used to be true, but VS2015 is actually completely fine without ReSharper. Roslyn is amazing; I've even built my own custom code analyzers (i.e. compiler extensions) that integrate into the entire ecosystem seamlessly (the NuGet package registers them with the VS project, the IDE shows you code analysis on the fly, and the C# compiler uses them on every build, including on the CI/build server - and you can enforce stuff with it, e.g. raise build errors).

>but .NET lacks a build tool like Maven or something similar

There are .NET build tools - FAKE, CAKE, etc. - but people don't use them much; tooling integration is notably missing. It would be nice to have something like Gradle in .NET, but VS support for msbuild is good enough for most.


FAKE and CAKE are not replacements for Maven, but for Ant/Make. Maven and Maven-like tools on the other hand are so much more.

Here's a personal anecdote. I work on Monix.io, which is a cross-compiled Scala library for the JVM and Javascript/Scala.js.

I could easily do that because of SBT, which is Scala's flavor of Maven. Basically Scala.js comes with SBT and Maven plugins. And you can easily configure your project, indicating which sources should be shared between the JVM and Scala.js and which sources are specific. And afterwards SBT takes care of compilation for multiple targets, building JAR files for both the JVM and Scala.js. And then you can also sign those packages and deploy them on Sonatype / Maven Central. The tests are also cross-compiled too and running whenever I do "sbt test", no tricks required or effort needed to do it. And this project has a lot of cross-compiled tests.

And if I were to work on a mixed JVM/Scala.js project, I could also include a whole assets pipeline in it, with the help of sbt-web, another SBT plugin. SBT and Maven have a lot of plugins available.

The Scala.js ecosystem relies on Maven / SBT for build and dependency management. There's no "npm install" for us, no Gulp vs Grunt cluster fucks, no artificial separation between "real Scala" and Scala.js, only what's required. And trust me, it's a much, much saner environment.

Compare with Fable. Compare with FunScript. Don't get me wrong, I think these projects are also brilliant. But it's because of the ecosystem that they'll never be popular, because the .NET ecosystem inhibits reusable F# cross-compiled libraries.


Umm, .NET has nuget packages.


The JetBrains folks make incredible tooling.

I got into their stuff via pycharm and phpstorm and then ended up using intellij with the php and python plugins (plus node and a bunch of others).

I haven't enjoyed using a development tool this much since back when Borland was good; intellij is pretty much 99% of my development tool usage at this point, and that 1% is mostly nano for quick 'I need to edit this one line' edits.

The thing they seem to get so right is that intellij with, say, the php plugin feels like phpstorm, and with the python plugin like pycharm, at the same time; it's not the features so much as how they are all seamlessly integrated and how the tool as a whole feels designed.

One of the few bills I genuinely don't mind paying each month; everything else I've tried just doesn't compare*

* Your Mileage May Vary.



No, FAKE is not a replacement for Maven. As I said, .NET is left behind in the Ant/Make era.

And since you mentioned an F# project, the absence of a Maven-like build, dependency management and deployment tool is why F#'s Javascript compilers are behind, because you have no shared infrastructure, no cross-compiled libraries, etc, because it's just too painful to do it.


Why isn't it a replacement?

NuGet/Packet for dependencies.

Octopus/Web Deploy etc for deployment.

All of which can be used from this tool.


>Also, I keep mentioning this, but .NET lacks a build tool like Maven or something similar (SBT, Gradle, Leiningen).

You mean in terms of integration with Visual Studio? Because both Maven and Gradle can compile .NET code (msbuild or xbuild). Gradle doesn't natively support Nuget's format, but there are plugins for it.

> .NET is still at the level of Ant / Make and .NET devs don't realize what a huge difference that is.

Gradle just uses javac under the hood.


What am I missing building projects with VS or xbuild?


For one, sane dependency management that doesn't require vendoring.


You can reference projects or other files that are in submodules. What about the .NET build process makes vendoring the only option?

Even so, it's always been more convenient in my experience when something just ships with fixed versions of dependencies already inside it. Those versions are known to work with the codebase and there's no chance of new bugs or other incompatible behavior being introduced from updates to the dependencies.


> What about the .NET build process makes vendoring the only option?

AFAIK, even NuGet doesn't allow you to just check in the dependency spec, rather than the contents of all of your dependencies.

> Even so, it's always been more convenient in my experience when something just ships with fixed versions of dependencies already inside it.

I agree. But that doesn't mean that they belong in Git, nor that they should even be copied to every single project.


> AFAIK, even NuGet doesn't allow you to just check in the dependency spec, rather than the contents of all of your dependencies.

I have a project where I only checked in the solution's repositories.config (packages/repositories.config) and the project's packages.config (ProjectName/packages.config), and it seems to work fine.


I'd have to agree, though I spend most of my time in the node space these days, via VS Code (though I don't use the debugger at all). VS proper with C# has felt better to work with than any Java tool/project I've ever used, though I haven't done much with Java. I jumped on the C# bandwagon in late 2001/early 2002, when it was still in development. I started moving over towards node around 5-6 years ago, and am currently looking for a reason to work with Rust or Go.

Most of my issues with C# have always, like with Java, been about "Enterprise" frameworks, patterns and practices implemented where they aren't needed, and complexity for complexity's sake. VS + C# is pretty damned nice, if you can work on a green-field project and not bring in everything under the sun.


VS was probably always a better coding and debugging environment. I've used VS as far back as VS 2010 and it's always been great for C#, but I've never had more than a mediocre experience writing Java in an IDE. Did anything support VS's level of convenient debugging ability for Java?


I have not used VS as much as I have used Java IDEs, but I'd say they are on par. VS shines in how well the debugger is integrated with the rest of the environment, but both NetBeans and IDEA offer very close to the same level of convenience. Admittedly, I have never liked the Eclipse debugger.


> As for the NIH syndrome, one of my favorite Paul Graham essays is still "What Languages Fix" (http://www.paulgraham.com/fix.html) particularly for C#: the problem that C# was invented to solve is that Microsoft doesn't control Java.

That's not what Graham said. He said, "C#: Java is controlled by Sun", which is quite a bit different from Microsoft not controlling Java.

Microsoft is fine with using and promoting languages they do not control. C++, for example.

The problem with Java from Microsoft's point of view was that Sun did not want people to use Java as a mere programming language. They saw Java as a platform, and wanted people to develop for that platform instead of developing for Windows, or Mac, or Linux, or whatever. Sun wanted all access from Java programs to the underlying facilities of the OS upon which a particular instance of the Java platform ran to go through the Java platform.

Microsoft wanted something with the good parts of the Java language without the baggage and restrictions of the Java platform, and so they made C#.


>and I'll bet that "open source" doesn't mean to MS what it means to the open source world

The GitHub repo uses the MIT license, so I don't see what you're talking about.

If you want it in your distribution repos, ask your distribution maintainers (and I bet Microsoft is going to do so anyway for the big ones)


Yes, please ask them... happy to help add PowerShell to any distribution .NET Core supports.


> I'll bet that "open source" doesn't mean to MS what it means to the open source world.

That doesn't make sense. ASP.Net, .Net Core, F# are all good examples of open source projects. This announcement promises the same for Powershell: Development with the community.

What are you missing? What's the criticism?

If you want to see a broken open source project, check out Android instead..


> if I can't apt-get it, I'm not interested,

It was released just now, under MIT license. Give it a little time and it'll show up.


Or even more! It will need to be backported to Trusty Tahr if one wants to apt-get it on Windows (subsystem for Linux) ;-)


WSL will end up on Xenial at least.


except that C# as a language is by leaps and bounds better than Java (both in syntax and useful features departments), so there have been other problems to solve, too.


Its ecosystem isn’t. C# will never get accepted as an open source product. It only has open source code; it’s not actually ‘open-source.’ Compare it with Typescript, which—despite originating from Microsoft—is a truly open project, and getting love left and right.

C# might be better than Java/JVM, but it’s not better enough. The culture/ecosystem barrier is so high that C# would have to be miles ahead, technically superior in every way, to overcome it. I do hate the “worse is better” adage, but there’s no mistaking it, it applies here.

It’s just too little, too late.

But Typescript is awesome, keep it up.

Ironically, Google’s competitor (Closure Compiler) has actually been unsuccessful for similar reasons. To this date its main repo is just a clone of the internal master. For whatever reason they’ve never attempted to rebase on open-source release.


What do you mean by it not being open source? The core CLR is MIT licensed and the compiler is Apache licensed.

This sounds like fud to me.


I have no issue with the license.


So it's maintained in the open on GitHub, it's technically open source in terms of licensing. Yet you claim it's not really open source. Care to clarify?


It’s technically open-source, that’s the point. There’s more to open-source than license. Sorry but there’s no way for me to clarify without just repeating my original comment.


Your original comments mostly contained your personal opinions, not actual facts. The fact is that it is open source and you are FUDing.


> Your original comments mostly contained your personal opinions

I never tried to pass it as anything else.


It is open source in every sense. The development is open, they accept patches/contributions etc.

.net is far more of a true open source framework than android is.


> It’s technically open-source, that’s the point. There’s more to open-source than license. Sorry but there’s no way for me to clarify without just repeating my original comment.

It's free software, and in that sense the license is the only thing that really matters. However, if you're discussing open collaboration styles then that's a whole different discussion. Either your project is free software or it isn't. Whether it has a diverse and open development community is a separate problem, and doesn't fall under "is this project [free software]".


> There’s more to open-source than license.

Such as? You seem to have a mental model of things that make a project objectively open source, that don't include the license. I'd be curious what those things are.


I really don’t, it’s more of a feeling. With an open-source release like .NET, the source seems more like better documentation. In fact that was the case for early commercial Unixes—you needed the source code to actually use the system, but it wasn’t open-source.

Open-source-as-documentation (for lack of a better term) is still useful. It makes bug fixing a whole lot easier, for one thing. But it’s not quite the same as an open-source ecosystem. For that you need a diverse set of actors, sharing the same goal. That’s what I think makes a successful open-source project. You need to accept the fact that the project is not just yours. Something like that.

Of course Microsoft could do all those things. Who knows, if they’re determined enough they might turn it around. The problem here is, like I said, that Java is just good enough. No one really cares, except people already invested in the ecosystem who could use some better documentation. That’s why open-sourcing is still valuable, but also why they’ll never gain adoption of the kind they’d need.

Sorry if that sounds like rambling, it’s sort of late.


by that definition, it'd be hard to call Python open source.


do you more mean like, the decisions, and planned changes, etc, aren't open? (along with being tied to the whims of the CEO and the company's money?)


They might be open, but there’s democracy and then there’s democracy. See for example recent MSBuild incident (but don’t try to argue about it it’s just an example).

As I said, it’s a feeling. The feeling is it’s Microsoft’s project, everyone else is along for a ride. And that’s fine, but it’s something different. Let’s just not pretend technical merits drive adoption, that’s rarely true.


> They might be open, but there’s democracy and then there’s democracy. See for example recent MSBuild incident (but don’t try to argue about it it’s just an example).

> As I said, it’s a feeling. The feeling is it’s Microsoft’s project, everyone else is along for a ride. And that’s fine, but it’s something different. Let’s just not pretend technical merits drive adoption, that’s rarely true.

Uhm. So many free software projects work like that. A company creates something, releases it as free software. Yes, people contribute (and that's awesome by the way) but in general all of the engineering talent works at the company because they wrote it in the first place (and they hire contributors). At SUSE this is how YaST, zypper, spacewalk, et al development works (with significant contributions from the openSUSE community, but we have teams of people working on those projects so our contributions are more of a concentrated effort). There's nothing new or different about this model of free software development (Sun did the same thing with OpenSolaris and Joyent does the same thing with SmartOS). Yes, GNU and Linux follow the hobbyist model but that's not how all projects work.


I was too harsh with the ‘open-source as-documentation’ term.

My point is this is just not enough to compete with JVM, which is already a vibrant open-source ecosystem.


i have a hard time with this argument. on one hand what you say is true: C# is a strictly smaller community than java. OTOH that's true of pretty much any language, and yet python, ruby, elixir, swift, golang, etc. communities are healthy and vibrant.

if what you really mean is 'java people won't switch to C# anyway', then i agree, but C# isn't a really a language for them. it's a language for people who don't like and/or aren't forced to use java by their employers.


People who aren’t forced to use Java will choose Scala or other JVM langs, like Kotlin, Ceylon (a favorite of mine) or even Clojure.

C# the language is not exactly that exciting. I get it, it looks attractive next to Java, but it’s still a verbose, corporate-first, sort of thing. If anything, F# is much more competitive. Too bad it’s on CLR.


[disclaimer, also MS employee].

This has already devolved into opinion territory but I don't think you're giving C# enough credit.

I picked up F# relatively early in its lifetime (2006ish?); back then there were many language features in F# that you just couldn't do in C#. The gap closed a lot when C# got LINQ, generics, and lambdas/first-class functions (these are relatively old language features by now).

If I want to write in a quasi-functional-programming style I can do it without having the language get in my way. I certainly wouldn't call it a "verbose, corporate-first" language, although the fact that it can be used for that is a bonus.


Don’t forget Rx. Not exactly language feature, but certainly a great contribution to come out of C#/.NET. And who knows if it would have happened without LINQ.

I like the language. Just not enough to use it over JVM. And I think most people feel the same way.


That is not quite true.

Many of us doing enterprise consulting do jump between Java and .NET projects all the time.

Sometimes even doing mixed projects, like the UI in .NET and the backend in Java.


i'll grant you that after a very brief consideration

> UI in .NET and the backend in Java.

makes a lot of sense.


The problem with native desktop Java applications is that although Swing is quite powerful, it requires developers to go through books like "Filthy Rich Clients"[0] to really take advantage of it. Which very few do.

To this day I still meet developers that aren't aware how to change the default L&F to native widgets, for example.

Whereas Windows Forms and WPF already provide a much easier out of the box experience, and have tooling like Blend available.

I am curious what the JavaFX sessions at JavaOne will cover.

[0] http://filthyrichclients.org/


As if C# were any less driven by PHB-dictated internal enterprise mandates than Java is.


That's not what Worse Is Better means. Did you read the Gabriel paper?

/nitpick


I got it from the Unix-Haters Handbook.


Well, you clearly didn't get the full definition. Go read "Lisp: Good News, Bad News, How to Win Big".


Sure, it beats Java, but the VM is worse for running other languages, and that's where .NET loses. F# tries to be nice, but reified generics are more of a limitation than help in that world.

In JVM land, Java now just hands you some specific, well-tested libraries: you write business code in Scala or Clojure, which I'd pick over C# by about as much as I'd pick C# over Java 7.

That said, I have far more faith in Microsoft improving their tooling than I have in Oracle doing the same: it's just that Oracle has a far smaller weight to carry, because the good JVM languages aren't even theirs.


Uhm, the CLR (hence the name) was specifically designed to host different languages that can interoperate with each other and unlike the JVM was not built around the capabilities and limitations of a single language.


> Sure, it beats Java, but the VM is worse for running other languages,

based on what criteria?


>I'll bet that "open source" doesn't mean to MS what it means to the open source world.

Before Satya Nadella took over it didn't. Now it mostly seems to.


Ah, that's beautiful. Using a no-true-Scotsman argument to claim it's not "Real" open source, while denigrating Microsoft for NIH. Classic.


> ... if I can't apt-get it, I'm not interested ...

I feel bad for you, then. The Ubuntu repos (and Debian) have almost nothing that is even remotely new and will usually be behind on most things. Trying to keep configs for an ArchLinux box and Ubuntu box synchronized is a bitch if new versions have good features, because you can't use them in your general config unless you give up on apt and install from source.


> This is why I find PowerShell to be an answer to a question nobody asked. If bash/zsh isn't cutting it for you, drop into something like Python, Ruby or Perl to do the heavy lifting.

I'm not sure this is really right though. You could make the same argument against Ruby or Python in favor of Perl or Pike, could you not?

PowerShell syntax is quite minimal, has a novel methodology for strongly typed interactive use, and has the ability to directly invoke managed code. You can write shell components in any .NET language and invoke them (with care at writing time, this can be done very cleanly, but even without said care it's possible with a bit of a mess).

> PowerShell will never accumulate a library as comprehensive as what any of those three have, each has had decades to accumulate packages of all kinds, and more are still being added.

It has the entire .NET library at its disposal. I certainly never feel a lack of support using it.
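For instance, framework types are callable straight from the prompt (a couple of arbitrary examples; nothing extra has to be installed for them):

    # Call static .NET methods directly:
    [System.IO.Path]::GetTempPath()
    [System.Net.Dns]::GetHostAddresses('example.com')
    # Or load an assembly and use its types (Windows PowerShell / full framework):
    Add-Type -AssemblyName System.Web
    [System.Web.HttpUtility]::UrlEncode('a b & c')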


> This is why I find PowerShell to be an answer to a question nobody asked. If bash/zsh isn't cutting it for you, drop into something like Python, Ruby or Perl to do the heavy lifting.

How are Python/Ruby/Perl going to give you structured objects from "ps"?

That's the promise of object-based pipes. You can get useful records out of your command-line tools without having to write an ad-hoc regex.
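For example (a sketch; Get-Process is PowerShell's rough analogue of ps):

    # No ad-hoc regex: each row is already an object with typed properties.
    (Get-Process -Id $PID).StartTime    # a [datetime], not a string to re-parse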


I find that to be the issue. You are considering it just raw text when it is actually formatted text that has been parsable for years with common Unix command line tools. It not being in the format you consider a structured object does not mean it's not an object, or even that it's not parsable. If you are using ad hoc regexes, I suspect you are not using all the tools available to you.

I feel like Kernighan and Pike do a much better job of explaining this than I ever could.

https://www.amazon.com/Unix-Programming-Environment-Prentice...


> You are considering it just RAW text when it is actually formatted text that has been parsable for years with common unix command line tools.

Parsing command output with sed/awk/etc (i.e. "common unix command line tools") is absolutely an ad hoc parser.

Let me give you an example that I recently ran into.

I have a tool that parses the output of "readelf -sW", which dumps symbols and their sizes. The output normally looks like this:

     885: 000000000043f0a0   249 FUNC    WEAK   DEFAULT   13 _ZNSt6vectorIPN3re23DFA5StateESaIS3_EE19_M_emplace_back_auxIJRKS3_EEEvDpOT_
     886: 000000000041c380    64 FUNC    GLOBAL DEFAULT   13 _dwarf_get_return_address_reg
     887: 0000000000424e60   122 FUNC    GLOBAL DEFAULT   13 _dwarf_decode_s_leb128
     888: 000000000043dca0   157 FUNC    GLOBAL DEFAULT   13 _ZN3re23DFA10StateSaverC2EPS0_PNS0_5StateE
So I wrote a regex to parse this. Seems pretty straightforward, right?

But then I noticed a bug where some symbols were not showing up. And it turns out those symbols look like this:

     5898: 00000000001a4d80 0x801058 OBJECT  GLOBAL DEFAULT   33 _ZN8tcmalloc6Static9pageheap_E
Notice the difference? Because it's a large symbol, readelf decided to print it starting with "0x" and in hex instead of decimal. I had to update my regex to accommodate this.

That is what makes a parser "ad hoc". You write a parser based on the examples you have seen, but other examples might break your parser. Parsing text robustly is non-trivial.

Worse, it is an unnecessary cognitive burden. Readelf already had this data in a structured format, why does it have to go to text and back? Why do I have to spend mental cycles figuring out which "common unix command-line tools" (and what options) can parse it back into a data structure?


The Unix answer is why are you using a regex on tab-separated values? Wrong tool for the job.

Of course the problem with Unix is that there are a thousand different semi-structured text formats, edge cases, and tools that must be mastered before you can make any sense of it all. Any time you point out the pain a Unix fan can just respond by pointing out your ignorance.


> The Unix answer is why are you using a regex on tab-separated values?

They aren't tab-separated. There are no tabs in readelf output.

Also if you assume they are space-separated, note that the first few lines look like this:

    Symbol table '.dynsym' contains 164 entries:
       Num:    Value          Size Type    Bind   Vis      Ndx Name
         0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
> respond by pointing out your ignorance.

Indeed.


I've actually become a pretty big fan of line-terminated JSON (all linefeeds etc. inside strings are escaped). Each line is a separate JSON object... In this way, it's very easy to parse, it handles this use case, and you can even pipe it through as JSON some more.

In this case, you can have objects as text, and still get plain text streaming with the ability to use just about any language you like for processing.


First off, readelf shouldn't switch between hex and base 10. Secondly, that's DSV, so you shouldn't have written a regex for it. You should have used either cut or awk, both tools SPECIFICALLY DESIGNED to do what you want.


What's DSV?


DSV: delimiter-separated values. readelf uses a delimiter matching the regex /\s+/. That is AWK's default field-splitting behavior, so AWK will parse it out of the box. Or you can pipe through

  tr -s '[:blank:]' ' '

to cut, which will give you the column you want.


I would like to see how it will look in PowerShell. $(readelf).ShowMeTheSecondStringFromTheEnd or $(readelf).PrintTheLastColumnInHeX?


A comparable PowerShell cmdlet would give you one object per line with properties corresponding to the columns. And no, those properties usually have sensible names, instead of "LastColumn".

Of course, for wrapping the native command you'd still have to do text parsing if you want objects. This was more as a comparison of the different worlds here not so much as "if I ran readelf in PowerShell it would get magically different output".
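As a rough sketch of what such wrapping could look like (ConvertFrom-ReadElfSymbols is hypothetical, and the property names are my assumptions about readelf's columns):

    function ConvertFrom-ReadElfSymbols {
        process {
            # Only symbol rows look like "  885: 000000000043f0a0  249 FUNC ..."
            if ($_ -match '^\s*\d+:') {
                $f = $_.Trim() -split '\s+'
                [pscustomobject]@{
                    Num   = [int]($f[0].TrimEnd(':'))
                    Value = $f[1]
                    Size  = $f[2]   # decimal or 0x-prefixed hex, as noted above
                    Type  = $f[3]
                    Bind  = $f[4]
                    Name  = $f[7]
                }
            }
        }
    }
    # Usage: readelf -sW ./a.out | ConvertFrom-ReadElfSymbols | Where-Object Type -eq 'FUNC'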


I happen to find your example of why to use an object a bit hilarious.

You are right, readelf has the object in a structure -- because that is what ELF is...

           typedef struct {
               uint32_t      st_name;
               unsigned char st_info;
               unsigned char st_other;
               uint16_t      st_shndx;
               Elf64_Addr    st_value;
               uint64_t      st_size;
           } Elf64_Sym;

If you wanted the object, why did you need readelf in the first place? Why not just read the ELF format directly and bypass readelf altogether? That seems to be what you are advocating by having readelf pass an object instead of what it does today.


You're asking me why I use a tool instead of parsing a binary format manually? Does that really need explanation?

If that is your attitude, why use any command-line tools ever? Why use "ls" when you can call readdir()? Why use "ps" when you can parse /proc?

You just pointed me to Kernighan and Pike a second ago. I didn't expect I would need to justify why piping standard tools together is better than programming everything manually.


I never said anything about not liking command line tools. In fact I love them and think they do an awesome job!

In any case, you just proved my point. You think it's insane to parse binary data while scripting, and I do too. That is why I think passing binary objects around on the shell is insane.

Now if you were talking about text-based objects (not binary ones), then that is an entirely different story, and I feel that is what we do today. In your example you have rows, which could be called objects, and members, which are separated out in columns. To argue that one text-based format is better than another is not something I am interested in doing -- mostly because there are a million different ways one could format the output. If you were to do "objects", I think they would have to be in binary to get any of the benefits one could perceive.

To be honest, I feel the output you posted is a bug in readelf. I would expect all data from that column to be in the same base.

I will level with you: I can see some benefits of having binary passed between command line programs, but I think the harm it would do would outweigh the benefit.

But if you really wanted to do that, you could. There is nothing stopping command line utility makers from outputting binary or any other format of text. You don't need the shell to make that happen.

What I think everybody is asking for is for command line developers to standardize their output to something parsable -- and I feel that most command line utilities already do that. They give you many different ways to format the data as it is. Some do this better than others, and I think that would hold true even if somebody forced all programs to produce only binary, or only JSON text, when piped.


This isn't about binary vs text, it is about structured vs. unstructured.

The legacy of UNIX is flat text. Yes it may be expressing some underlying structure, but you can't access that structure unless you write a parser. Writing that parser is error-prone and unnecessary cognitive burden.

PowerShell makes it so the structure of objects is automatically propagated between processes. This is undeniably an improvement.

I'm not saying PowerShell is perfect. From what I understand the objects are some kind of COM or .NET thing, which seems unnecessary to me. JSON or some other structured format would suffice. What matters is that it has structure.

I still don't think you appreciate how fragile your ad hoc parsers are. When things go wrong, you say you "feel" readelf has a bug. What if the authors disagree with you and they "feel" it is correct? There is no document that says what readelf output promises to do. You're writing parsers based on your expectations, but no one ever promised to meet your expectations. But if the data were in JSON, then there would be a promise that the data follows the JSON spec.


> From what I understand the objects are some kind of COM or .NET thing, which seems unnecessary to me. JSON or some other structured format would suffice.

They are .NET objects, which, in some cases wrap COM or WMI objects. The nice thing about them isn't just properties, though. You can also have methods. E.g. the service objects you get from Get-Service have a Start() and Stop() method; Process objects returned from Get-Process allow you to interact with that process. Basically wherever a .NET class already existed to encapsulate the information, that was used which gets you a lot more functionality than just the data contained in properties.
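For instance (a sketch; stopping/starting services needs elevation):

    $svc = Get-Service -Name Spooler    # a System.ServiceProcess.ServiceController
    $svc.Status                         # a typed enum value, not a string
    $svc.Stop()
    $svc.WaitForStatus('Stopped', [TimeSpan]::FromSeconds(10))
    $svc.Start()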


If the data were in JSON, it would promise that it followed the JSON spec -- but it's not; it follows its own defined spec, which in the case of readelf is apparently undefined.

Other programs that expect to be machine-parsable define their output in great detail. In the initial post I replied to, you mentioned ps. ps has many options to help you get the data you want without using standard parsing tools. That is because its output was expected to be consumed by both humans and possibly other programs.

Now take readelf, on the other hand. Its man page clearly talks about being more readable. Its author cares about how it will look on a terminal, and even goes through the effort of implementing -W, which makes it nice to view on larger terminals. It even shows in print_vma, where somebody went out of their way to print hex if the number was larger than 99999. If the author really cared about the ability to be parsed, they would have added an OUTPUT FORMAT CONTROL section providing the contract you are looking for. Just putting the data in JSON does not solve your problem. Why? Because an author who did not spend time defining the output properly in the man page is not likely to have implemented a JSON output type when piped, let alone taken the time to document the object structures in the man page.

You say it's not about binary vs text, but I don't think that can be said. There are lots of things to consider:

* Speed of encoding and decoding.
* Memory consumption issues, with larger objects needing to be fully decoded before being able to be used or processed.
* Binary data would need to be encoded and would likely result in much more overhead.

It's not clear to me that a binary option would not be better than a text one. Pipes today are not just used for simple scripts and system management.

There are lots of things that concern me; maybe it is just the implementation details:

* Not all command line programs are written with the expectation of being parsed. How do we handle that? Force the programmer to make all output parsable, regardless of whether they ever intended the program to be used in some script?
* Would a program put the same thing to stdout even if it was flowing to a terminal? Are terminals not for humans?
* Would structure be enforced? One of the awesome things about stdin/stdout is that you can send ANY data you want.

That all said I would love it if programs who intended on their output to be parsed offered a JSON output. I am not against structured output. I am against forcing programmers to shoehorn their output into some format that may not be the best for their program. I think a well designed and documented command line tool that expects to be parsed by other programs will go out of its way to ensure the format is documented and adhered to when operating.


It does follow a standard. It's DSV. Unix tools are really good at handling that. Awk and cut specifically.


[flagged]


The dollar sign is charming.

There's a few points here:

1) Not all data is text. In fact, very little of the data people see/work with day-to-day is raw text. It's silly to transform a PNG image into text to be able to pipe it around. (Or to pipe around its filename instead and have a dozen tools all having to open and re-parse it each time.)

2) There's nothing on PowerShell preventing you from serializing a piece of data to text if you want to. The key is: you don't have to.

3) Systems that depend on 50,000 CLI tools all having their own ad-hoc text parsers are cemented, mummified, cannot change. You can't change the output format of ps (to use an example in this thread) without breaking an unknown number of CLI tools. Even if you come up with a great way to improve the output, doesn't matter, you've still broken everything. This is less (but not none!) of an issue with PowerShell. I like computers to evolve to become better over time, and text-based CLIs are a huge anchor preventing that.


Unfortunately, PowerShell relies on everything running on .NET (well, I think COM works, too); the idea of a shell that can expose live objects is useful, but the platform limitations PowerShell faces in actually doing that make it a far-from-ideal implementation of the concept. Something built on a platform-agnostic protocol would be better.


Live objects don’t usually expose any protocols at all. They only expose an ABI. Do you know any platform-agnostic OO ABI, besides what’s in .NET?

If you wrap your objects in some platform-agnostic protocol like JSON, you are going to waste an enormous amount of CPU time parsing/formatting those streams at the objects' boundaries.


You can run streams of many millions of JSON objects pretty much as fast as the IO can feed it... most of the time, in situations like this, you're constrained by IO speed, not CPU... assuming you are working with a stream that has flow control.

I tend to dump data structures out as line-terminated JSON, and it works out really well for streams; you can even gzip it almost transparently. Parse/stringify has never been the bottleneck... it's usually memory (to hold all the objects being processed, unless you block/push back on the stream) or IO (the feed/source of the stream can't keep up).
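
A minimal sketch of that pattern, assuming jq is installed and events.ndjson is a hypothetical file of newline-delimited JSON records with a "level" field:

  $ gzip -c events.ndjson > events.ndjson.gz                      # NDJSON compresses transparently
  $ zcat events.ndjson.gz | jq -c 'select(.level == "error")' | head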


Even if printing and parsing is computationally cheap, memory allocation is less so.

If you expose JSON, each serialize/deserialize will produce new instances of the objects, with the same data.

The architecture of PowerShell implies commands in the pipeline can process the same instances, without duplicating them.

Another good thing about passing raw objects instead of JSON: live objects can contain stuff that is expensive or impossible to serialize, like an OS handle to an open file. Sure, with JSON you can pass file names instead, but then commands in your pipeline need to open/close those files. Not only is this slower (opening a file requires a kernel call, which in turn does various security checks for the user's group membership and the file system's inherited permissions), it can even cause sharing-violation errors when two commands in the pipeline try to access the same file.


And just think how much faster it would be if there were no serialization and I/O involved at all...


And my method can work over distributed systems with network streams... There are advantages to streams of text.


What advantages?

PowerShell works over networks just fine, because of the standardized ISO/IEC 17963:2013 protocol, a.k.a. WS-Management.


> Even if you come up with a great way to improve the output, doesn't matter, you've still broken everything.

You phrase this as if changing things for the sake of change were a good thing. It is not.

Well, perhaps it is good for the software vendor, but from the customer's point of view, having to re-learn how to do the same stuff over and over every other year is a PITA.


This is why I have trouble getting along with Linux users.

Without change, there's no improvement.


It's often the customers who are complaining that your current output is not suitable.


First off, you can pipe binary data around. Most tools just expect text.

Secondly, if you use DSV to parse ps output, as you should, adding a new column at the end won't break anything. A fancier parser won't even break if you add one in the middle, but that's usually not worth the effort to write.
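
For example, a positional parse like this keeps working if a new column is later appended on the right (a sketch, assuming a procps-style ps):

  $ ps -eo pid,comm,rss | awk 'NR > 1 { print $1, $2 }'   # skip header, take first two fields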


> What the M$ community fails to see is that text streams can be consumed by __everyone__.

Text can be poorly parsed by everyone, yes. I especially love it when default tool settings mean two different computers give different text results, because the installation defaults changed at some point. What's not to love about trying to properly escape and unescape strings via the command line, while simultaneously keeping your own shell's escaping in mind, and having scripts that are neither backwards nor forwards compatible? And this is to say nothing of different distros.
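
(The classic instance of this, with hypothetical filenames:)

  for f in $(ls *.txt); do rm "$f"; done                   # breaks on "my notes.txt": word splitting
  find . -maxdepth 1 -name '*.txt' -print0 | xargs -0 rm   # the safe spelling you just have to know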

It's as if the text was built for humans instead of my tools half the time, or something. I usually try to centralize my parsing of text in one place so when it invariably breaks, I don't have to rewrite my entire shell script.

Some basic structuring - I hesitate to call it a brittle object model, when most of the time I'm dealing with something more akin to C structs than invariant- and abstraction-laden Smalltalk or somesuch, or Java's sprawl of factories and inheritance - makes things a bit easier. New fields can be added without breaking my regular expressions. I don't need to worry about escapes to handle special characters. I can trivially dump to text for display (by simply running the command, as this is the default behavior of the shell) or to feed into text-consuming tools.


It's even more than that: text formatting allows the use of generic filtering and text-processing tools, whereas with objects you will tend to do less on-the-fly command-line composition and write or reuse more tools dedicated to one particular object or another. In the end I'm not sure making structured data the default yields such improved usage in the real world, because if you work across a broad number of types you will need to know more specialised tools instead of generic ones, and will have fewer higher-order composition tools at your disposal.

Now of course you can always format your structured object to text, but that does not matter. What matters is what is convenient in the proposed UI, and what is mainly practiced in the real world because of that convenience and the amplification loop it creates between tool authors and users.

Also, some objects are originally handled in text form, and their structured decomposition is extremely complex compared to an informal description and the naturally refinable heuristics that come with it. For example, you can grep a source file in any language (with more or less efficiency depending on the language, but at least if there is a unique symbol you will find it), whereas getting and handling it in a structured way basically means you would need half of a (modular) compiler, a huge manual to describe the structure, and possibly a non-trivial amount of code to actually do the lookup.
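
Concretely (hypothetical symbol name):

  $ grep -rn 'parse_headers' src/    # heuristic, language-agnostic, good enough

versus needing a per-language parser just to ask "where is this symbol?" of a structured representation.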

The PowerShell approach is not all bad, though, and obviously there are some usages where it is superior to text-based shells. But for day-to-day usage as a general-purpose shell, and a programmer's shell, I'll stick to the Unix approach.


Structured data allows the use of generic filtering and structured data processing tools. The basic requirement is reflection, objects being able to tell you about their structural composition.

If code were stored in a structured representation, you could still search it for a string object containing a symbol name. You can match a structural pattern just like you match a regex against text.

Typical shells can be thought of as a REPL for a programming language that makes the file system easily accessible and uses said file system or pipes to pass data between functions/commands, sometimes as human-readable text. Most programming languages don't encourage passing data around as strings.


Nothing prevents me from changing those objects into a text stream. In fact, it's infinitely easier than turning a text stream into objects.


I feel you missed my point. It's not just a stream of raw data. It's a stream of formatted text... There is no magic or hand waving involved.


I guess that's why HTTP2 is now binary, because text is awesome.

https://http2.github.io/faq/#why-is-http2-binary


> How are Python/Ruby/Perl going to give you structured objects from "ps"?

The usual answer would be "by parsing /proc instead and returning meaningful objects that expose the relevant parts of the OS", but why would you want to do that when you could just output the relevant data in easy-to-parse XML or JSON?
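
The raw material is already sitting there under /proc on Linux, e.g. (one labelled field of one process):

  $ grep '^VmRSS' /proc/self/status   # resident set size of the reading process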


For the most common use cases there would be a standard function to get structured objects; in Python you have os.listdir() instead of ls, etc. If there is none, you can call ps and parse the result manually, or call the system functions via C interop.


For programming, I agree. However, Python doesn't work as a system shell; for that, you want something optimized for running commands, without having to wrap them in a function call with a list of arguments as strings.


did you try xonsh?


I did, and I found it a pretty big chore in practice. Its two modes (shell mode and Python mode), never quite knowing which one you're in: I could never quite get used to that. PowerShell doesn't have this problem, because it was designed for the command line first and for scripts second (which is, incidentally, also its major downside if you ask me).


> Its two modes (shell mode and python mode), never quite knowing which one you're in, I could never quite get used to that.

Drive-by armchair comment: it seems like the obvious answer would be to change the prompt depending on what mode you're in?


I find this criticism of xonsh's "modes" baffling. Xonsh is brilliant precisely because it isn't modal the way, e.g., vi is.

Xonsh lets you freely mix python code and command invocations in a highly transparent way.


you can force captured subprocess mode if necessary


Powershell can use any .NET library... "Nobody" meaning you. Which is perfectly fine. But there are quite a few users of Powershell out there.


Now that it exists there are users, but when people were hoping for a modern or more portable replacement for DOS batch files I'm not sure this is what they had in mind.


"PowerShell in Action" covers a bit of the history and decision-making around PowerShell; interesting stuff.

Ultimately, given that PowerShell was initially targeted at systems admins/engineers and that Windows is highly object-oriented (in contrast to Linux's text streams), I believe PowerShell was an apt development. Having done plenty of systems engineering across Linux and Windows, it seems to fit the needs of Windows pretty well; a fantastically unique shell that wouldn't have existed if not for Windows.


Think of it as a replacement for vbscript instead, with a commandline.


> PowerShell to be an answer to a question nobody asked.

It's conceivable that people working in the MS World who have built up (or inherited) a base of PowerShell code are asking, "what would it take to run this on Linux, or at least some of it in some way?"

Of course programmers who don't currently work with the PowerShell at all are probably not asking "I want the PowerShell on Linux" in any significant numbers.


FYI, IronRuby never made it into a usable state and never came close to other implementations like JRuby or Rubinius, and IronPython hasn't received an update since the end of 2014, which for an open-source project like a language implementation means it's nearly dead.

That said, adopting Ruby or Python as a shell language wouldn't make sense, especially Python with its whitespace-aware syntax. I mean, can you imagine piping commands at the command line using Python? I can't. These languages haven't made it as shell replacements on UNIX either, even though everyone complains about Bash.


Last IronPython release: 2014

Last IronRuby release: 2011


Both of these two projects attempted to solve a problem that very few people have - using .NET libraries from Python and Ruby. In exchange, you had to give up all of your existing native libraries. That is a trade-off that very few people wanted. Source: I started the IronRuby project.

I learned this lesson the hard way a few times in my career - always have respect for people's existing code investments.


Looks like things are changing in IronPython land:

https://www.infoq.com/news/2016/08/IronPython-Leadership


I can't speak to Ruby, but Microsoft has an active Python group.[1] Granted, it seems to focus on Visual Studio (which I'm told has excellent Python tooling) and Azure.

[1]: https://blogs.msdn.microsoft.com/pythonengineering/


Neat, but that doesn't seem to be integrated into .NET the way IronPython was (well, is, just not maintained, it seems). Which is what the post I was responding to was talking about.


Ah, touché. I'm not a Windows person, so I don't know the boundaries well.


I used to love Python when I came to it years ago, after slogging through QBasic, VB6, C++ and C, but in this day and age it doesn't really offer much to me over doing things in modern C#, especially on the CLR. I appreciate some of the changes that were made to the runtime to support IronPython, like dynamic support and the DLR, but if I'm going to do something not-C#, then I'd rather go towards F#.


Lately I use Node.js and have replaced the cmd tools with node modules. It is more verbose than piping shell commands, but easier to understand, and the same script can be reused across platforms.


Really? I have to do this to execute docker commands and it is painful. A lot of child_process I/O and it gets hairy.


I find some of it a lot easier with shelljs or mz, but YMMV. Streams of line-delimited JSON are really nice as well.


> If bash/zsh isn't cutting it for you, drop into something like Python, Ruby or Perl to do the heavy lifting.

or Lua?


Hehe, I was so happy when I came across a Lua module for Windows administration. And so sad when I found out it was meant for Windows NT 4.0 administration. :(

A current reincarnation of such a module would be really sweet, though.



