JS and Python are slow [see below] and heavyweight. Lua might work, I don't know... but this being header-only makes it pretty nice for simpler situations.
EDIT: Sorry, I wrote this a little too quickly and ended up writing a bit too coarse of a description. I meant JS and Python are both heavyweight to include in a project, and Python is slow both to initialize and to run. I didn't mean JS is slow to run. I don't know how fast it starts up either.
> It is ridiculous that header-only is considered an advantage.
Why do you believe it's ridiculous? Being able to integrate a third-party library by just adding a few source files to your source tree is as simple as it gets.
> The state of C++ build tools is very poor, and it is harming the language as a whole.
This assertion makes no sense, particularly in light of this discussion. Installing a header-only library is a solved problem, and even template-heavy libraries such as Eigen are already distributed and installed quite easily with standard Linux package managers.
I should clarify. Being able to add headers to a project in C++ is easy but adding translation units is not (usually). This encourages header-only libraries even when they are not really appropriate, increasing compilation times etc.
Other things that become a huge albatross and are thus avoided:
- Adding compile-time build steps, e.g. for code generation.
- Adding dependencies of your own, even on, say, a tiny library of helper functions – unless you want to copy and paste it into your header.
With a package manager, those things 'just work'. Package managers also make it much easier to update to newer versions of the library as they're released.
I don't see your point. Adding custom build steps is a basic feature that's been supported by pretty much every popular build system for decades now, just like adding your own dependencies. Heck, CMake even allows users to configure a project to download packages from the web and integrate them into a build, with custom build steps if needed. Even if we ignore this fact, there are also package management tools such as Conan which handle this case quite nicely and also support cross-platform deployments.
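For concreteness, the CMake feature being referred to here is presumably FetchContent; a minimal sketch (the library and tag are just examples, any Git-hosted dependency works the same way):

```cmake
# CMakeLists.txt fragment: fetch a dependency at configure time and
# make its targets available to this build (fmt used as an example).
include(FetchContent)
FetchContent_Declare(
  fmt
  GIT_REPOSITORY https://github.com/fmtlib/fmt.git
  GIT_TAG        10.2.1
)
FetchContent_MakeAvailable(fmt)

add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE fmt::fmt)
```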
And let's also not ignore that pretty much each and every popular Linux distro already packages and distributes C++ libraries, and offers packaging tools and package repository services to distribute any dependency.
I'm starting to suspect that those who complain about these sorts of issues have little to no experience with C++.
If that were sufficient, we wouldn't see so many header-only libraries.
> Even if we ignore this fact, there are also package management tools such as Conan which handle this case quite nicely and also support cross-platform deployments.
Conan would be a decent solution if everyone used it. I tried it briefly last year; I ended up not using it for that project because it didn't support certain features I wanted (namely, compiler toolchain management), plus Bintray was having issues at the time. But my overall impression was that it was... fine. I may end up using it in the future.
For now, though, as a library author, most of your users won't have Conan set up; as a library consumer, most of the libraries you use won't be on Conan; and in either position, most people building your software won't know how to use Conan. Until that changes, it doesn't really solve the library friction problem.
> And let's also not ignore that pretty much each and every popular Linux distro already packages and distributes C++ libraries, and offers packaging tools and package repository services to distribute any dependency.
If you have a library which is popular or a dependency for something that's popular, then yes, the N different Linux distributions and package managers for other operating systems will all handle packaging your library themselves. If you just want to be able to upload some code and have it be immediately reusable by the wider world, well, it's not particularly feasible to make packages for N different distributions yourself, and even if you do, the Linux distros most people use are months to years behind the bleeding edge of software releases, so you'll have a while to wait before those packages actually reach people.
Sometimes I wonder why projects are either "single header only" or tens to hundreds of separate modules. One header and one source file would be a nice compromise between compile time and ease of use.
Yes, I realize that some header only libraries support using the header as either header or implementation, but this still blows up compile time.
Does it really blow up compile time? The number of translation units and optimization are usually where compile time is spent. Having fewer but fatter translation units helps compile times tremendously, since most of the per-translation-unit time in LLVM is spent in stages beyond parsing the source.
Boost is really the only example I can think of where small utility comes at the expense of huge compile time increases.
> I should clarify. Being able to add headers to a project in C++ is easy but adding translation units is not (usually).
Where do you see a difference?
> This encourages header-only libraries even when they are not really appropriate, increasing compilation times etc.
This assertion makes no sense. Headers only declare interfaces, and you only require header-only libraries if you're deep in template and template-metaprogramming land. Even so, it's quite trivial to package and distribute those libraries just like any other library.
It's not an absolute advantage, it's just an advantage for some use cases. I certainly wouldn't mind a better build system that didn't make it less advantageous, but it's a hard problem...
I'm not sure about JS being necessarily that slow... The browser engines have done tremendous work in the last decade to make it very performant; it approaches Java in some regards. It would be interesting to see applications where this difference would even be noticeable, let alone ones with both so much complex logic in the scripting itself and such sheer computational requirements that it would ever make any difference...
Also, if this were the selling point of ChaiScript, I would want to see some benchmarks. I have a hard time believing it will outperform LuaJIT or one of the faster JS engines with optimizations; it's not like performance is a novel desire in embeddable scripting languages, and tons of man-hours have already been put into them. The website doesn't seem to have any such benchmarks.
Now consider that you'll generally be doing a LOT more on initialization than just importing these packages, a lot of packages end up being extremely heavyweight, and very likely you'll be calling Python several times in a row... it can easily waste seconds on startup, even on Linux.
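A rough way to reproduce this kind of measurement (numbers will vary by machine; assumes `python3` is on your PATH, and the import list is just an example):

```python
import subprocess
import time

def startup_ms(argv, runs=5):
    """Average wall-clock milliseconds to run a command to completion."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(argv, check=True)
    return (time.perf_counter() - start) / runs * 1000

# An empty program measures pure interpreter startup...
bare = startup_ms(["python3", "-c", ""])
# ...while a few stdlib imports, as a typical script has, add to it.
with_imports = startup_ms(["python3", "-c", "import re, json, subprocess"])

print(f"bare: {bare:.1f} ms, with imports: {with_imports:.1f} ms")
```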
First off, WSL and MSYS2's startup times aren't really fair to compare, since if you're running on Windows you're probably going to be running native Python, especially if you're using the kind of application that has scripts/plug-ins. I don't currently have a Windows machine to test that, but I imagine you do, since you just timed it.
I also don't agree that 10ms is ridiculous. That's barely any time at all; it's less than a frame at 60Hz. And how many interpreter instances are you going to start up? Surely no more instances than you have cores; that means, in the worst case on something like a Ryzen Threadripper or a maxed-out POWER9 (on the order of a hundred hardware threads), about a second. For most people it's going to take under half a second; considering most programs take tens of seconds to start up, I think that's fine.
Not to mention that if you have 'complex initialization code', that's going to take a long time regardless of language; what we're testing is the language core's baseline startup time.
WSL is fair, MSYS2 isn't. That's why I took out MSYS2 as I was revising my comment. The timings I have now are just WSL and plain, native Windows.
And the number of interpreters you spawn isn't a function of core count either. It depends entirely on what you're doing, and nobody was talking about parallel execution. If you run a bunch of Python scripts in sequence, you'll multiply the startup overhead...
I guess we'll just have to disagree on 10ms being ridiculous for a program that does nothing. To me it is. And half a second is orders of magnitude more so.
P.S. All of this is my timing on a pretty darn fast CPU. Try running on a more typical laptop CPU, switching to battery, and you'll get even worse timing.
If you're running a bunch of scripts in sequence, you should have a single interpreter state you use for running all of them. I would do that even if startup were effectively free.
And yeah, I have only a crappy laptop, and your benchmark takes more like 50ms per spawn for me. Which I still think is reasonable, because there's no reason to make a new interpreter state for every script.
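What reusing "a single interpreter state" looks like, sketched in plain Python with `exec` rather than the CPython embedding API (the script strings are made up):

```python
# One host process, one persistent namespace: interpreter startup is paid
# once, instead of once per script as with repeated `python3 script.py`.
scripts = [
    "x = 2 + 2",
    "y = x * 10",      # sees x left behind by the previous script
]

state = {}             # the shared "interpreter state"
for src in scripts:
    exec(src, state)

print(state["y"])      # -> 40
```

In an embedding host, the equivalent is calling the interpreter's initialization once (e.g. `Py_Initialize()` in CPython) and then running each script against that same interpreter.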
Your question was "how is Python slow?" where you were apparently arguing over the slowness of standalone Python.
For an embedded application, of course you'd try to keep the interpreter state if it's possible. Sometimes you can't because of global state changes; sometimes you can. And sometimes, even with a single interpreter, the rest of the C++ program is fast enough that the embedded Python initialization time would dominate its running time, rendering the standalone vs. embedded distinction moot.
In any case... all you're doing is amortizing slow startup cost over the lifetime of the program. The startup is still slow. You were wondering "how" the startup is slow, so I wrote you benchmarks and ran them to illustrate. If what you really wanted to argue was that startup time is irrelevant, maybe you should say that instead of asking "how is startup slow" and making me spend my time writing a benchmark for something you don't care about.
By the by, why does your Python example import a bunch of things, while the PHP code imports nothing? That would negate the entire purpose of the experiment, no?
Because there wasn't anything to import! preg_match(), getopt(), system(), etc. are already available... if anything, you should be asking why I'm unfairly penalizing PHP by comparing it against a no-import Python!
And I did give you an example of Python that doesn't import anything, and as you can see (and could have easily tested yourself) it was still a lot slower...
Python not importing anything is the same speed as PHP not importing anything on my system. Oh, and incidentally, JavaScript is 20% faster than PHP and Python both, and afaik you don't have to import libraries there either.
I'm indeed aware, but it doesn't change the conclusion, and I was trying to keep things consistent with the parent comment. All that does is reduce 89ms to 55ms.