I find that very abstract software packages like this are difficult to visualize without an example. This page does not offer one, but TFM does -- see [1]. Also note that TFM describes the fact that this is Windows-only. That makes it substantially less interesting, IMO.
That's great! You should definitely add that to the docs.
I just went to the same page and was extremely disappointed when I saw that it required Windows.
A benchmark comparing it with similar solutions (esp. Neo4j) would be much welcome.
Sure, we'll definitely update the docs! Or better, work out a way to put the docs on GitHub and keep them synchronized with the site (they're Markdown anyway).
Yes, we will continue working on bringing both the core runtime and the toolchain (TSL compiler, data management tools, more computation modules, language bindings etc.) online and ready for the *nix world.
> At this rate, are we going to see Windows open-sourced?
Doubt it.
In my personal opinion, they're just playing catch-up and trying to grab a piece of the pie in the server space that open source has been gobbling up.
Linux got web servers, the cloud, scientific computing, big data (Hadoop, Spark, etc.), and so on.
I believe they're releasing these as open source so they can get people onto their Azure cloud and into the Microsoft ecosystem instead. They're emulating what makes Linux so popular: a good ecosystem and open-source software.
I do not think they will give up Windows for free, or open source it at all.
They were willing to lose the internet for the sake of the desktop. Their mentality was that everything goes through the desktop. Google and the internet proved them wrong and made apps OS-agnostic via web apps. They neglected search engines for the desktop, and Google ate it up. That's how crazy it is.
There's also a theory that they dominated gaming via DirectX so they could keep their OS popular. I doubt they would give up DirectX and their gaming lead via open source.
I think it's a good strategy, but I personally love the open-source ecosystem much more than Microsoft's, and I have trust issues with them from the past.
They're just playing catch-up, just like Bing vs. Google, IE vs. Mozilla, etc. There's still money to be made even though there's a clear leader in each space Microsoft neglected.
I had the same misunderstanding as well. However, I think they ought to keep the Windows 10 Insider program available to everyone free of cost, without having to pay for a license. I feel like this will fall on deaf ears, though. Microsoft watchers say Windows already treats stable as a testbed for the enterprise users who withhold updates for n days. So a new update shows up without much testing, breaks a bunch of stuff on consumer hardware, Microsoft finds out thanks to logs or Twitter chatter and fixes it, and then everyone, including enterprise, takes the updates.
I think eventually Windows being source available is possible but I doubt it will be free in a meaningful way.
So was Microsoft Office when they were trying to put Lotus, Wordperfect, and every other word processor and spreadsheet out of business. And they succeeded. Follow the pied piper boys and girls...
I am under the impression that MS has a lot of different parts, and that these parts have different cultures and goals. MS Research and their developer division seem really open-source-friendly, but that may or may not say anything about their OS team.
They were also very careful not to touch anything GPL-licensed. The Linux syscalls were all implemented from scratch. So it's effectively a proprietary reimplementation of parts of the Linux kernel. I'm not sure how much that says about open-source plans of Windows itself.
That says more about the fact that that implementation pattern is a core part of the NT OS, and the underlying design of the two kernels is so different that just throwing Linux code in the mix would not make sense (in the way that hearing purple doesn't make sense).
WSL is an emulation of the Linux user mode programming environment. The fact that that requires implementing Linux kernel functionality is simply an artifact of the lack of a more modular design (or really any up front design) in Linux.
> The fact that that requires implementing Linux kernel functionality is simply an artifact of the lack of a more modular design (or really any up front design) in Linux.
Curiously, the recommended syscall mechanism on x86 is calling __kernel_vsyscall in the vDSO. If everybody did that, then you could just make your own loader with your own custom vDSO that implements the syscalls in userspace. However, some programs (especially statically built ones) still make syscalls directly with int 0x80, which is slow to trap in userspace, or may not even be possible depending on the OS.
Now it would have been fantastic if calling through the vDSO had simply been the only documented way of doing a syscall on x86_64, but the kernel developers at the time decided not to do that. So now on x86_64 the syscall instruction is always used directly, and we can't even trap it on any x86_64 OS: most of the time either some unrelated syscall gets executed, or the kernel returns an error to the calling process without any trap.
> Curiously the recommended syscall mechanism on x86 is by calling __kernel_vsyscall in the vDSO.
There are times when this doesn't work. Syscall resumption and cancellation come to mind. Also, __kernel_vsyscall is a hack to make fast syscalls work on the awful 32-bit x86 architecture, not a nice feature.
> it would have been fantastic if calling through the vDSO had simply been the only documented way of doing a syscall on x86_64
There is no __kernel_vsyscall or similar feature on x86_64.
> There are times when this doesn't work. Syscall resumption and cancellation come to mind. Also, __kernel_vsyscall is a hack to make fast syscalls work on the awful 32-bit x86 architecture, not a nice feature.
It's a pretty nice feature in the context of being able to make compatibility layers on other OSes in userspace, which was the discussion here. Or it would be, if it were always used. Why doesn't it work for syscall resumption and cancellation?
> There is no __kernel_vsyscall or similar feature on x86_64.
Yes that was exactly what I was complaining about.
> You can on Linux using seccomp.
Yes, but why would I want to make a Linux compatibility layer on Linux?
> It's a pretty nice feature in the context of being able to make compatibility layers on other OSes in userspace, which was the discussion here. Or it would be, if it were always used. Why doesn't it work for syscall resumption and cancellation?
For resumption, a signal that interrupts a resumable syscall points RIP to an explicit int 0x80 instruction in the vDSO. This behavior would be a bit unfriendly to emulate.
For cancellation, the only good implementation of cancellation that I'm aware of (musl's) relies on syscalls being an actual atomic instruction so that a signal handler can tell whether a syscall actually happened. __kernel_vsyscall is an opaque function and can't be used like this.
> Yes, but why would I want to make a Linux compatibility layer on Linux?
For sandboxing? For experimentation? Or how about to make a compatibility layer emulating something else that runs on Linux?
I don't know if there's a single document somewhere with an explicit roadmap. Their GitHub page (https://github.com/Microsoft/BashOnWindows) includes links to multiple sources that could collectively be considered a sort of roadmap, including the Issue tracker, UserVoice page, team blog, and discussion forums. I've also asked the devs questions on Twitter a couple times (like @richturn_ms) and usually gotten fast responses.
Regarding roadmap, to be honest, we don't really have one, but we do have a primary mission which we're laser-focused on: Make Windows Subsystem for Linux able to run the vast majority of tools developers need to get their work done.
We've taken a somewhat unusual, but HIGHLY valuable approach of building features based almost entirely on feedback from the community. We prioritize syscall implementation based on the frequency with which we see failures due to missing, incomplete, or incorrect syscalls. This way, we focus on delivering the most bang for the buck.
This said, now that our syscall coverage is starting to edge towards covering the majority of mainstream dev tools, we're starting to look at more strategic features, including improving disk perf and support for networking, devices, etc.
We'll be publishing more details in a few weeks on our blog above (and, of course, via https://twitter.com/richturn_ms) about where we're going to invest effort beyond Creators Update. Looking forward to hearing from y'all ;)
I'm interested in this roadmap as well. I currently follow their bash changelog blog, and check that user vote site, but I haven't seen anything like a more concise future plan.
It does have something to do with SQL Server vs the world -- and that is life as usual at Microsoft.
Microsoft had a big leadership role in XML standardization at the W3C but has had little to do with RDF. (ex. Oracle contains a forward-chaining triple store optimized for geospatial work, SQL Server does not.)
The Microsoft SQL Server team would naturally oppose any effort to make a competing database. They've tried to deep-six the JET engine that powers Microsoft Access because (i) people want Access, and (ii) Microsoft SQL Server Express/Compact/whatever is not an effective replacement.
Thus, there is not a lot of room under Microsoft's umbrella for a competitive project, but "yet another open source graph database" is not seen as a threat.
Not yet. Their next strategy is to try and use ARM (again) to shove UWP down your throat.
I love Windows but I seriously will not touch UWP until they loosen up the sandbox restrictions so that I can do regular IPC with a win32 desktop app. You can't even send an HTTP request to a little node.js Web server running on your desktop right now. Fuck that. I'm not buying into it.
> I love Windows but I seriously will not touch UWP until they loosen up the sandbox restrictions so that I can do regular IPC with a win32 desktop app.
The whole reason for the UWP sandboxing is to make it so that users can confidently install UWP applications with some assurances that it isn't spyware/malware/blackmailware that will take over their PC or steal their personal information.
UWP is Microsoft's answer to years of criticism they received for the way process permissions work on Windows. They aren't going to just dump it all because you and a few others find the privilege model inconvenient.
> You can't even send an HTTP request to a little node.js Web server running on your desktop right now.
If your node.js web server was network accessible you could. But, no, it won't allow unchecked IPC as that directly defeats the whole point of sandboxing to begin with.
Personally I haven't used UWP much either, but that has nothing to do with sandboxing and everything to do with poor performance and UI issues (e.g. window controls, built in UI elements, etc). I still find the Windows 10 UWP Calculator horrible to use, it was a MASSIVE downgrade over the old one.
Honestly, this particular complaint reminds me a lot of UAC. After decades of people complaining that Windows account security was too lax, they implement the graphical popup equivalent of `sudo`. Then proceed to get skewered in public opinion for having done it.
Yes, proper security means you will be inconvenienced. In the same way that you would be inconvenienced if your bank called you to confirm that $10000 wire you didn't initiate. It's inconvenient that you have to take a call and resolve the issue, but it's a hell of a lot better than finding out at some indeterminate point in the future that $10000 is missing from your account.
If only. sudo, as configured out-of-the-box on Ubuntu, gets out of your way and lets you work (Except if you haven't properly authenticated in the last 15 minutes); polkit is similarly out-of-your-way.
Sudo is also (relatively) easy to configure: "this command line can be run under sudo by anybody", "this command must not be run under sudo even if they know their password", etc.
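For illustration, rules of the kinds described above might look like this in a sudoers file (the command paths and user names are hypothetical examples, not recommendations):

```
# Members of the admin group may run any command (password required).
%admin  ALL=(ALL) ALL

# Anybody may run this one command under sudo, without a password.
ALL     ALL=(root) NOPASSWD: /usr/sbin/service nginx restart

# bob may run anything except a shell, even with his password.
bob     ALL=(ALL) ALL, !/bin/sh
```

There is no UAC-side equivalent of this kind of fine-grained, per-command policy file.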
UAC is always-in-your-face about everything, often unexpectedly - it hijacks your desktops at inopportune times, and there's no way to tell it "Yes, this software can do this again for the next 15 minutes without asking me again".
UAC is very, very far from being the graphical popup equivalent of sudo. If it had been, people would not have complained as much. (Someone always complains. But in the case of UAC, most of the complaints are justified)
Not true: UAC only pops up for apps that need to access a file, registry key, etc. that affects system-wide behavior.
"... often unexpectedly - it hijacks your desktops at inopportune times ..."
This isn't true: UAC pops up precisely when an app is launched that needs to access/modify a system-wide resource. If you're running apps that OFTEN need to access system-wide resources, you could choose to run them as admin and avoid the UAC pop-ups ... while also hoping that those apps are not malicious or over-enthusiastic about "helping".
"and there's no way to tell it "Yes, this software can do this again for the next 15 minutes without asking me again"."
There kinda is - just run the app as Admin. It'll then run with admin rights until you close it.
Of course, without specific examples of what you see as erroneous behavior, I've posted what are, I'm sure, inadequate responses to your specific issues. If you'd like to share your specific issues, I'm sure we could have a more fruitful dialog, and who knows, I may be able to carry your case internally to see if we can improve things in future OS releases (no guarantees here other than my trying my best).
While what you say was true about Vista, in Windows 7, 8, 8.1, and 10 UAC prompts are suppressed when they're the direct result of user interactions within the OS itself. Each OS reduced the number of UAC prompts a user experiences.
They also broke the security of the default configuration, because now you can just interact with system components to bypass UAC; see e.g. [0]. This is officially not a security vulnerability, because UAC isn't actually a security barrier unless you set it to "Always prompt"; it's just a feature to make applications play nice. But note that it's not on "Always prompt" by default...
To be fair, MS didn't really communicate clearly to their downstream vendors that the OS was getting "locked down" and they didn't wait long enough for the driver manufacturers to catch up to the new standards.
That said, it was ironic that people complained about security in a Windows release.
We can improve security by turning the computer off, but you have to balance security and utility. At the moment UWP is way too far towards security.
Until you can run power tools (compilers, IDEs, web servers, system tweakers) as UWP apps, it will be inferior to Win32.
This is why no one is investing in UWP apps, you don't want to spend months/years developing something only to find out that the app is crippled because of a limitation you weren't aware of.
> Seems a little silly that my UWP app can access everything on my network except for my own darn computer via localhost, doesn't it?
Unfortunately, for historical reasons, localhost (127.0.0.1) is treated almost like a named pipe, meaning a LOT of Win32 (and UNIX-style) applications treat data over that path as "trusted."
For one specific example, I've used an HP driver that installs a local web server for no good reason, and if you can send a specially crafted request, it will execute that request in the SYSTEM context. All you need is localhost access and the knowledge to pull it off (this is not exploitable remotely).
A lot of software has been designed with the assumption that localhost is trusted and they have therefore used it for IPC. This is exactly what you're attempting to do too. But let me ask you this, what happens if a third party UWP application tries to use your localhost backdoor? Does it allow UWP malware? How are you going to verify that only YOUR UWP application connects to your Node.js instance?
File IPC allows you to limit it to just your UWP application because presumably the file will be within that UWP application's unique storage block. It might be a pain but at least the ultimate result is secure.
One of the programs that does this is dnscache. It wouldn't be all that bad if localhost weren't an actual named pipe instead of a virtual network interface :( - you can't filter localhost traffic with a firewall in Windows. This means everything with localhost access can speak to the internet (via the DNS pipe), bypassing any firewall rules you might set.
I don't get the outrage over the rumored UWP-only Windows. There is no indication that MS would drop Win32. This is a separate product, likely for a particular market. It's more of an iPad competitor than a Windows desktop replacement. So don't use it if it offends you; that's OK. I don't think this is all that forceful, really.
They already did UWP-only Windows: that was Windows RT. Nobody bought into that either, but Microsoft still really, really wants that closed-market, walled-garden style business model to happen somehow, so they're going to keep trying.
I believe that their next strategy is to release a full Windows OS (with Win32) that runs on ARM which lets Win32 apps run via an emulator. In this environment, Win32 apps will most likely run like crap and as a matter of course Microsoft is going to keep pushing everybody (developers and users) toward the model that they think will make them the most money.
It's really no problem for my career because I stopped exclusively using Microsoft's tech years ago and I'm comfortable using Linux or a Mac if necessary (but really, I can't stand their UI and I think Apple is even worse than Microsoft in many, many ways). I just like the Windows UI the best and I'd like to be able to build stuff for it in the future....but not if they are working towards taking away my freedom to control my own computer.
Microsoft famously keeps trying, and often the product initially looks like a failure until the third iteration is the one that clicks. (Windows 3.0, Surface 3, Windows NT's third version which was branded "NT 4.0"...)
Windows RT was about as popular as Windows 1.0. I think Microsoft views the upcoming ARM-based "Cloud Windows" (whatever the branding is going to be) as the second iteration of this idea, and hopes to really gain traction a few years later.
Care to elaborate on the HTTP part? You just need the "Internet (client)" capability in your manifest, it's even enabled by default when you create a new project
Moreover, win32 desktop apps can use the AppService mechanism exposed by UWP apps (but cannot host an AppService themselves, just connect to UWP-hosted AppServices)
Give it a try. You can't send requests to `localhost` or `127.0.0.1` at all.
AppService is largely useless for IPC between UWP and Win32 since the UWP app can't initiate a request to the Win32 app.
Another thing you can't do, which is really annoying: UWP won't let your code start an external process and use standard i/o pipes to communicate with it.
The only possibility is to use files for IPC and you'd have to have your Win32 app listen for changes in the file or directory, then open it up and read the data. However UWP can't listen for changes in those same files, so it's a real PITA.
That will work, since it is treated like all other network traffic. localhost and 127.0.0.1 are considered privileged routes, since you can bypass the firewall and exploit local processes to escalate.
Hey! I didn't implement the loopback restriction in WinRT, but I sure was one of the PMs in the network team when it happened.
As it turns out, pretty much everyone on the thread is right. Localhost is special on a network: it's the only machine that you're guaranteed actually exists, and it's therefore a high-value target for the bad guys.
I hadn't heard about the printer creating a local HTTP server, but it sure doesn't surprise me one bit.
At the same time, we could identify companies with real products that we wanted in the WinRT ecosystem but where the company already had an architecture that assumed that they could connect to localhost. It was a very painful call, but in the end we decided that security was more important than these companies.
We decided against a checkbox because users already have too many checkboxes, and mostly don't understand any of them.
If knowing the current local IP address gives you access to localhost, that would be a bug. More likely, you see that it works because you're running in a debugger, where the localhost access restriction isn't currently enforced.
And for goodness sakes everyone -- stop restricting yourself to IPv4! IPv6 is a real and growing thing! We made a bunch of fixes in Windows just so that the WinRT network APIs would work perfectly without ever knowing or caring whether you were going over IPv4 or IPv6.
> We decided against a checkbox because users already have too many checkboxes, and mostly don't understand any of them.
How did you decide that it was too many? Also, if users don't understand any of them, what's the big deal about one more checkbox?
This was a bad decision IMO and it's the kind of thing that is causing nobody to use UWP. There are so many stories out there of people trying to use UWP for their project and then turning back when they realize how restricted the sandbox is.
Honestly, doesn't it seem a bit absurd that UWP apps can talk to anything else on my network except for Win32 processes on my own computer?
If Windows Phone were a thing, I'd probably be building UWP apps already. Mobile devices are the only place where I accept these types of restrictions, but it's mostly because I have no choice. If Microsoft comes out with a phone that can run Win32 apps, I'd switch to that in a heartbeat because then I could get full Chrome with extensions and I'd be able to do things like run uBlock, quick javascript switcher and all my other favorite extensions. (Yay freedom!)
Because how would an ordinary person ever make the right choice? How would any kind of anti-virus ever tell the difference between a "good" program that's connecting to an embedded web server in an appropriate way, and a "bad" one that's intended to take over the system?
Most other checkbox security choices at least can be explained. A word processor probably doesn't have any legitimate reason to use Bluetooth (for example), and therefore a customer has a chance of making a reasonable choice.
But for localhost access -- my word, there's no rhyme or reason for it. As a simple example, I worked on a statistical package back in the 90's (yay RS/1!) that was implemented as two programs on Windows. One was the GUI client and the other the statistical server. There's nothing about "statistics" that obviously screams, "must have localhost permissions" :-)
Because you currently ship systems with a firewall that is unable to filter outbound traffic per application. DnsCache bypasses filtering by effectively providing a tunnel between localhost and the external network interface. Maybe you know of a way to selectively limit access to DnsCache per application? Effectively whitelist a couple and block the rest?
The ordinary-person argument is flawed. The so-called ordinary person doesn't even know what a network card is.
OK, so ContentChanged is new since the last time I looked - thanks for that! However, IPC via files is still a PITA in general compared to something easy like sockets.
Using an external IP address is a terrible solution for communicating with another process on the same machine. As soon as you disconnect from the network, it will stop working.
What does IPC between UWP and native buy you? Do you have some legacy code that has to run natively (been there with a Java app many years ago using TCP to talk to a tiny host process for some native code)?
Not the author, but I can see a use case where you already have existing "services"/"daemons" alongside your Win32 client app. If you want to rewrite the client from Win32 to UWP, it seems that you wouldn't be able to?
Why do that? Indeed, one could just call the Windows 10 API from the existing client [1], but it seems that you would miss out on the easy install/update from the Store and the other benefits a UWP app provides.
Wow, it's weird I never actually tried to send a request to localhost on UWP, usually only to other PCs on the network during development. I didn't know that.
The Win32 app can connect to the UWP AppService, on PC you can also bundle a desktop app in your package and start it when the UWP app is launched. This desktop app can then act as a bridge between your UWP app and the "legacy" world, offloading the "dirty" stuff to the slave desktop app. It actually works, I used this for getting the taskbar to blink in my side-project for a messaging app.
I think he was talking about programming the apps, not just using them. From the looks of it (after reading the comments here and nothing more, so excuse me if I'm completely wrong) it may be a real inconvenience from the dev perspective: I can somehow understand the "no 127.0.0.1 communication" policy, but I expect to communicate with a spawned process via stdin/stdout pipes.
Old style IPC leads to information leaks and is another attack vector.
You cannot do that on Android, for example; Google explicitly removed System V IPC from their Linux fork. You are expected to use TCP/IP or Android RPCs, assuming the app has the android.permission.INTERNET permission.
> You have the same issue in other sandbox models.
Would what PC-BSD does with jails qualify as an exception?
> So people are complaining about Microsoft adopting what is already best practices on the other desktop/mobile sandbox models.
Now I'm curious - weren't people complaining when the other sandbox systems were designed/created? It really looks a bit inconvenient (I'm referring to the SO explanation of what Android Chrome does), so I'd expect some reasonable opposition. It could be that with time people got used to the restrictions and don't complain that often anymore.
To be honest, I like PC-BSD model, but that's probably not a good idea for platforms where the resources are constrained. It works ok on the desktop, though - I didn't work with PC-BSD itself for long, but I did the same with Docker on Linux (for web browsers) and the performance hit wasn't that bad IIRC.
As for the other sandbox models, the only people I have seen complaining thus far have been on sites like HN; I have never heard any of our customers complain about those restrictions, or care about them.
IIRC iOS lets you connect to other apps on 127.0.0.1 just fine. But it won't work in the obvious use case of a foreground client and a background server without additional hacks, because the server can be suspended and the connection attempt alone won't wake it up.
I see them benefiting immensely. A big problem that Microsoft has is that once you start going down rabbit holes on Windows the number of people out there who have a clue is vanishingly small. So for systems programming on Windows there is very little coverage online. Opening Windows up will broaden the number of people who understand the platform on a deep level.
Earlier in my life I did a lot of Windows system-level programming, including some minor driver development, and now I'm almost exclusively in the *nix camp, sometimes doing similar stuff.
I can't say that there is a disparity. If anything, the parts that are documented are usually better documented in the Windows world, and there are very good books about most components. (To be fair, I never read a book on anything Unix apart from Lions'.) There is a large amount of stuff online for either, but I often have the feeling that correct examples are less frequently found for Linux in particular.
I've never done kernel work on Windows but on Linux at least you can dig into the source code to figure out how stuff works when the documentation is lacking. I suppose that's what the parent was talking about.
Not giving out the source code forces you to write better documentation, though. In an ideal world both would be available, but personally I prefer good documentation to source-code diving.
Windows System Internals are probably the best documented in the world, MSDN alone is a treasure trove of information and there are countless books on the matter.
And unlike some other platforms what is written tends to match reality, and stay relevant for a long time (for better or for worse).
Sun is a beautiful case study in what happens when you're a little too aggressive about the "openness" mantra. They bet the farm on SPARC and lost. If they had kept pieces of Solaris closer to their chest, who knows what would've happened, but they would at least have had one more potentially viable commercial offering.
I assume that Microsoft is not going to be naive enough to do the same.
IMO Microsoft's interest in openness revolves around remaining a viable option in the cloud era. They need to protect their investment in the .NET ecosystem and MS-centric development workflow, which means making it easy for people to run .NET programs on *nix-based VPSes and containers.
AIUI, Sun bet on being able to sell stuff (not just SPARC) to FinTech firms. They borrowed money at the height of the bubble, and when the bubble burst, their customer base cut back on hardware purchases. Because they couldn't service that expensive debt, they had to find a buyer.
I have no idea why you would think that, when they refuse to document what gets communicated back to Microsoft, make it impossible to turn off telemetry for home users, and even refuse to let you postpone reboots after updates.
It is wishful thinking to believe that "they want to be more open, but just can't". They are open sourcing only where they are weak, and that's just not true of the OS (or Office or SQL Server, for that matter).
Windows open-sourced? Maybe an older version (you can already get the kernel source for XP/2K3 under a restrictive license, and you don't have to be an MS employee for it), but I don't see MS open-sourcing anything newer for the foreseeable future; otherwise I bet someone would just make a fork with all the telemetry and other invasive features removed, which certainly wouldn't help MS.
It's not like they're doing it out of the goodness of their heart. They don't have a choice, they were becoming irrelevant, and now they're playing catch up to try to bring back all the devs to their platform.
MS was definitely not becoming irrelevant. The vast majority of enterprise software is written in .NET and the MS stack runs millions of businesses.
They are evolving the framework to be faster and cross-platform based on external pressures but in the last few years they have managed to overtake several of the other language/framework options to again be one of the top choices for new projects.
"""The vast majority of enterprise software is written in .NET"""
I'm only familiar with one domain of enterprise software, and that's ERP software. I'd like to see a percentage for "vast majority", but let's just say I'm struggling to think of ERP software based on the .NET stack outside of the Microsoft products (which are not exactly a success story).
SAP is based on Java/ABAP, Oracle is based on Java, Salesforce is mostly Java (and those already cover >50% of the worldwide market). I think Infor uses both Java and .NET but maybe someone else can chime in.
About the only .NET shop I can think of is IFS.
Zero of the FLOSS ERPs that I know of use .NET, for obvious reasons (mostly Java as well, some Python and the occasional Perl, etc.). Maybe some will in the future, but who knows. I'd also argue that most new ERPs will likely be written with a "web-first" mindset, which might mean .NET but is more likely to mean more traditional web stacks (IMO).
Microsoft products are certainly used all over the place (most notably SQL Server in SAP products) but I think for this sub domain Java is king. Since ERP is a decent chunk of enterprise software I don't think it's likely that a "vast majority" of enterprise software is indeed based on the .NET stack.
> I'm only familiar with one domain of enterprise software and that's ERP software.
That's the issue. I'm not talking about ERP (or any other) software offered by vendors. Many corporations build their in-house line of business/ERP/backend apps in .NET.
> The vast majority of enterprise software is written in .NET and the MS stack runs millions of businesses.
What data are you basing this statement on? I'm genuinely curious because the vast majority of indicators I've seen clearly show that Java is by far the #1 enterprise language. Greenfield projects are probably far more likely to choose .Net but for now, there is a hell of a lot more important legacy Java code that will never be practical to port.
I believe that C#/F# are the future of managed enterprise languages (Roslyn, CoreCLR/FX, and quite soon RyuJIT are far superior to Java equivalents, imo) but I highly doubt their adoption is anywhere near Java.
Decades of combined experience between me and about a dozen of my developer friends working in F50 companies. Java is definitely common as well, but .NET has an advantage because of Windows, Office, Exchange, Sharepoint, Active Directory and more, along with the ecosystem of C#/VB with Visual Studio for development and SQL Server for database needs with all of its analytics and reporting services.
Agreed, but is it a bad thing for the OSS community? They're generally using unrestricted licenses, MIT and Apache 2.0, so this is legit OSS.
While I still haven't bought an MS product since 1996, I don't think I have a problem with using their OSS. I see it as supporting companies when they do the right thing.
Honestly, Apple's tightening of control over their platform in recent years, like requiring LLVM bitcode with apps on some of their devices (which only works if it's Apple-LLVM-generated bitcode), has me swinging in the opposite direction from them.
It would go a long way for my perception of Apple if they either dropped the LLVM bitcode requirement or at least standardized on a non-proprietary version.
I am using a time-series db where each metric measurement requires some static context information to be included, e.g. Event 1 happened in Device A at Node B, Location C. These static entities (A, B, C) are perfect for graph vertices. But when I persist the graph on disk (I store all the data statically: no links, just columns), the static entities become dimensional data.
Is there a better way to do this?
Ideally, we should persist the object graph directly without any data conversion. I see a difference between how we compute and how we store data.
Not specific to GraphEngine, but you could store all meta-data about the device separate from the time-series itself. The composite metric name (key?) would then include the measurement name and the entity name (device name in this case).
Storing time-series in a columnar format (blobs of time-values) and entity data in row format sounds good to me.
If you look at data historians (PI et al), that's how they lay out the data: metrics storage is separate from the asset store, or Asset Framework as it's called.
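A minimal sketch of the layout the parent describes, using in-memory Python dicts as stand-ins for the two stores (all names here are hypothetical, not GraphEngine or PI APIs): entity metadata lives in a row-oriented asset store, while measurements live in a columnar store addressed by a composite metric key.

```python
from collections import defaultdict

# Row store: one record per static entity (device, node, location).
assets = {
    "device:A": {"node": "B", "location": "C", "model": "X100"},
}

# Columnar store: composite key -> parallel arrays of timestamps/values.
series = defaultdict(lambda: {"t": [], "v": []})

def record(metric, entity, ts, value):
    """Append a measurement under a composite key like 'event_count/device:A'."""
    key = f"{metric}/{entity}"
    series[key]["t"].append(ts)
    series[key]["v"].append(value)

record("event_count", "device:A", 1000, 1)
record("event_count", "device:A", 1060, 3)

# The graph-like context is recovered by joining on the entity id,
# without duplicating the static attributes in every time-series row.
key = "event_count/device:A"
print(series[key]["v"], assets["device:A"]["location"])
```

The point is that the static entities are written once in the asset store, and the composite key is the only "edge" the time-series needs to carry.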
This era of open source from MS is great, but my reaction is always "OK, here's the Microsoft version of something I've been using for a couple of years already".
I'd like to see the Spanner+Cyc GIS-capable global distributed real-time graph engine with FPGA accelerated OLAP support MS Research is probably sitting on, because we can almost / pretty-much hack that together with OSS now.
Microsoft of 2017 makes me forget about Microsoft of 1997. It's insane how a CEO shakeup and a new cultural shift can seemingly add another major boost to its brand.
I welcome open source. Eventually, I believe it will eat commercial software, if the right economic incentives are in place. Microsoft may be signaling this to the market.
I started to get this warm fuzzy feeling as well. Then I experienced a bug in OneDrive and decided to report it. A week later they responded by referring me to a site where users get 7 votes to pick which bugs they want the team to investigate. The warm fuzzies disappeared...
If I may paraphrase: "Never forget that large monopolies can wield disgusting influence, but don't let hate for past monopolies blind you to modern monopolies either" :-)
> don't let hate for past monopolies blind you to modern monopolies either
Just because Microsoft is no longer the monopoly-du-jour, doesn't mean they aren't still a monopoly...
Just because they have been making (successful) forays into open-source, does not mean they won't still fiercely protect (with whatever means at their disposal) their monopoly cash cows (Windows and Office).
The "standardization" of Office Open XML wasn't all that long ago, after all (ECMA in 2006, ISO in 2008).
Windows is a legitimate cash cow, but because of server installs.
Office makes a lot of money; SQL Server and other datacenter products make much more.
> Windows is a legitimate cash cow, but because of server installs. Office makes a lot of money; SQL Server and other datacenter products make much more.
And yet on the server Microsoft does not have a monopoly.
Consider that the main reason Microsoft has a profitable business selling server software is because of their desktop monopolies through many diverse paths.
Microsoft has changed over the years and is not the same company it was in 1997. It used to hate FOSS projects and now it contributes to them. MSFT used to hate Linux and now embraces it, adding Ubuntu support to Windows 10.
Sure never forget, but if you see changes for the better, learn how to forgive. If they go back to the way they were in 1997, just speak up about it.
Microsoft adapted to tablets with Surface and they seem to do well for artists using pressure sensitive pens.
> if you see changes for the better, learn how to forgive
Sure, I'll consider doing so when an Office user can set the OpenDocument format as the default, or at least when opening an OpenDocument file means Office will save it back in that format by default.
I guess this comes down to Microsoft failing basic verification.
I remember there was a setting in 2007 or 2010 to change the default document standard but I forgot how it worked.
You can set OpenOffice or LibreOffice for document default types in Windows.
I like LibreOffice because it supports more file formats than MS Office. Someone emailed a WordPerfect document to a group my wife was in and MS Office could not open it. So I used LibreOffice to open it and convert it to Doc format so her group could use it.
> there was a setting in 2007 or 2010 to change the default document standard but I forgot how it worked.
TL/DR: Badly.
OK, it looks like it has been possible to set the default file types to Open Document Formats for several years, I just hadn't noticed.
But I naively thought that with that capability would come support for more MS Office features when saving in ODF, or at least that features not supported by MS Office in ODF would be disabled or hidden if ODF is set as the default to keep users from shooting themselves in the foot.
Basically, even if you set ODF as the default, if you use a feature that is not supported (such as putting a table in a presentation slide), everything looks fine until you hit save. Saving the file saves a downgraded version of your work (a table on a slide is converted to an image), and you probably only get one warning pop-up saying "some features may not be supported". I bet the still-open file continues to look and act fine; the problems with the saved file only become apparent much later.
Imagine the user's surprise when they re-open their presentation deck file (remember, created and saved in MS Office) the next day and many aspects of their work on that presentation were never saved or have been downgraded to uselessness...
And that doesn't even address the fact that many of these "unsupported" features are ones that ODF actually supports perfectly well, but that Microsoft just hasn't implemented for saving in ODF.
My, what an exceptionally lovely user experience that is. It just oozes good faith and sincerity of effort toward interoperability, doesn't it?
Everything you see changed in Microsoft was done by people: those who work there and the community as well. They pushed on things, the culture and all. The people working there are just people like the rest of the world; the company's official position was known through its executives' voices, but the people working there and the community in general helped tremendously in steering the direction (along with, of course, the company's desire to survive, to stay alive). It was slow, it is still slow, it is a tug of war, but it is happening :)
Nope. Same here. However, it's just Chrome on Android that doesn't recognize the CA; even the native Android browser is okay. Bizarre. It may be because they have an incomplete cert chain https://www.ssllabs.com/ssltest/analyze.html?d=www.graphengi... and Chrome on Android has had issues with that.
I've been running into the idea of computational graphs a lot recently. It's at the core of Tensorflow (and NN in general) but it also comes up for example in Apple's AVFoundation where all audio processing happens in a graph of audio units. Does anyone know what's the theoretical foundation of computational graphs?
There is the biological analogue, which has inspired neural networks:
> Recall that in our general definition a feed-forward neural network is a computational graph whose nodes are computing units and whose directed edges transmit numerical information from node to node.
> Each computing unit is capable of evaluating a single primitive function of its input. In fact the network represents a chain of function compositions which transform an input to an output vector (called a pattern).
> .. programming paradigm that internally represents applications as a directed graph, similarly to a dataflow diagram. Applications are represented as a set of nodes (also called blocks) with input and/or output ports in them. These nodes can either be sources, sinks or processing blocks to the information flowing in the system. Nodes are connected by directed edges that define the flow of information between them.
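To make the dataflow description above concrete, here's a toy computational graph in Python (entirely illustrative, not any particular framework's API): nodes hold a primitive function, directed edges carry values between them, and evaluation walks the graph recursively with memoization.

```python
class Node:
    """One vertex of a computational graph: a primitive function
    plus the upstream nodes whose outputs feed it."""

    def __init__(self, fn, *inputs):
        self.fn = fn          # primitive function this node evaluates
        self.inputs = inputs  # directed edges from upstream nodes

    def eval(self, cache=None):
        cache = {} if cache is None else cache
        if self not in cache:  # memoize so shared nodes run only once
            args = [n.eval(cache) for n in self.inputs]
            cache[self] = self.fn(*args)
        return cache[self]

# Source nodes produce constants; processing nodes compose them.
x = Node(lambda: 2.0)
y = Node(lambda: 3.0)
s = Node(lambda a, b: a + b, x, y)    # s = x + y
out = Node(lambda a, b: a * b, s, y)  # out = (x + y) * y

print(out.eval())  # → 15.0
```

This is the same shape Tensorflow-style frameworks use, except their graphs are built once and then compiled/executed by a runtime rather than evaluated by naive recursion.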
> There is the biological analogue, which has inspired neural network
I'm somewhat aware but it seems like the idea of a computational graph is the most generic computational idea I can think of and I'm surprised it's not more explored.
> Another ancestor would be the Data-Flow paradigm:
It's not just functional programming (though I do agree it fits better there). All the reverse engineering tools have some feature to represent assembly as a control-flow graph (CFG), which is super helpful.
> I'm somewhat aware but it seems like the idea of a computational graph is the most generic computational idea I can think of and I'm surprised it's not more explored.
>> Does anyone know what's the theoretical foundation of computational graphs?
Automata can be represented as graphs: that's the main idea. When you look at the typical automaton diagram with states and transitions, that's a graph (with states as vertices and transitions as edges).
I think the confusion arises from the fact that, while automata can be represented as graphs, graphs can represent a much broader array of processes and objects (e.g. belief networks or semantic networks). I guess you can represent pretty much anything as a graph.
So "computational graph" as I understand it, just stresses the point that what is represented is a unit of computation (a.k.a. an automaton a.k.a. a grammar a.k.a. a language etc. etc.) rather than some other kind of graph.
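The automaton-as-graph point in miniature (a made-up example, not from any library): states are vertices, transitions are labeled edges, and running the automaton is just a walk through the graph driven by the input string. This hypothetical DFA accepts binary strings with an even number of 1s.

```python
# Edge set of the graph: (state, symbol) -> next state.
transitions = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(s, start="even", accepting=frozenset({"even"})):
    """Walk the transition graph from the start state; accept if the
    walk ends in an accepting state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]  # follow the labeled edge
    return state in accepting

print(accepts("1011"))  # three 1s, ends in "odd"  -> False
print(accepts("1001"))  # two 1s,   ends in "even" -> True
```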
> So "computational graph" as I understand it, just stresses the point that what is represented is a unit of computation (a.k.a. an automaton a.k.a. a grammar a.k.a. a language etc. etc.) rather than some other kind of graph.
Exactly. I think the fact that the nodes represent units of computation is enough to distinguish them from ordinary graphs.
Funny how Microsoft has been rocking hard with their OS releases lately (and good on them for doing so), but there's still a pervasive feeling in the dev community about their true eventual motives and strategy.
"Fool me once..." I guess, or as we say in France, "a scalded cat fears cold water".
I still like the neat Pajek. Nifty little piece of software, unknown by most, but really powerful! Especially if you are into social network analysis. Who else uses it?
I see that the source code "os.h" file has directives to handle Linux and Apple. Has anyone managed to build it and use it in the even most trivial way?
At the current state you could link Trinity.C with a custom Mono host and then execute your GE assemblies/Trinity.Core. Check out CMakeLists.txt for details. :)
We're working to interface Trinity with CoreCLR. .NET standard 2.0 would make this much easier.
Since you've done this repeatedly and ignored our request to stop, we've banned your account.
Actual astroturfing, when it occurs, is an abuse of HN that we crack down hard on. Defending this community against gaming and abuse is a huge priority for us. Any user who thinks they might be seeing it happen on HN should email us right away (hn@ycombinator.com) so we can investigate.
Imaginary astroturfing—the bug that causes some users to be certain that those who disagree with them can only be nefarious shills because otherwise the pure reason of their own point of view would be fully accepted—is also an abuse of HN. This one is orders of magnitude more common, and it is poison. It eats away the heart of civil, substantive discourse, the assumption of good faith on the part of others.
Therefore we ban astroturfers, and we also ban users who accuse others of astroturfing or shilling without evidence. An opposing view does not count as evidence, and playing this card as a rhetorical device in an argument breaks the HN guidelines.
This shouldn't be downvoted. There is definitely at the minimum Microsoft employee vote brigading at this point. HN needs to start making upvotes, downvotes, and time of vote public.
Accusations of astroturfing or shillage without evidence are a serious breach of civility on HN. Since you've done it before and ignored our request to stop, we've banned your account.
This is not a cuddly new Microsoft. First comes the embrace (look at all this stuff on our github!), then comes the extend (run your Linux stack on Windows and never have to give up Visual Studio!), I'm sure you know what comes next. Hint: PC manufacturers no longer have to give you the option to disable Secure Boot.
[1] https://www.graphengine.io/docs/manual/index.html#what-is-ge