Plywood: A New Cross-Platform Open Source C++ Framework (preshing.com)
196 points by petercooper on May 26, 2020 | 71 comments



> Most open source C++ projects are libraries that are meant to be integrated into other applications. Plywood is the opposite of that: It gives you a workspace into which source code and libraries can be integrated. A single Plywood workspace can contain several applications – a webserver, a game engine, a command-line tool. Plywood simplifies the task of building and sharing code between them.

I think this works as a good overview.


Well, that is implied by the title: "Plywood: A New [...] Framework". Frameworks are by definition about providing the core elements to build your stuff on top of them[0].

[0] https://en.wikipedia.org/wiki/Software_framework


It has been my experience that most software posted on HN has little description of what it does, and no description of why it exists. The parent comment was highlighting the excellently concise mission statement written on the first page. I wish every project would do as well.


Agreed. It seems that people love to just throw around “library”, “language” and “framework” now, but don't know the differences.


The word "framework" by itself provides no indication of how specialized the tool is, or what problem(s) it aims to solve.

The above description indicates that it is a very general-purpose framework (handling things as disparate as game engines and web servers) and that it aims to enable code reuse, not just eliminate boilerplate code.


Eh, no, that description doesn't do that; a better description is the one above it, because it actually mentions what the framework does and provides. The part quoted in the message I replied to is largely implied by the word "framework". Of course it isn't a bad thing to be explicit (especially since some people may think that framework is just another word for library), but it isn't any clearer about what the framework does.

For example think of MFC: you know what that framework is all about (Windows GUI applications), yet the description could easily fit with MFC (and indeed all of the above have been implemented on top of it).


So Unreal Engine, Blender and Visual Studio? All talking to each other?


Actually, you could probably argue that Blender already is these four things: a framework, Blender itself, an Unreal Engine and a Visual Studio (the latter two in the form of its game engine (defunct) and its editor, for example).


So basically sort of like a monorepo.


Library from the author's Arc80 game engine. The author appears to have significant game engine experience:

I’ve worked in the game industry for 14 years. Until 2015, I worked as a Technical Architect at Ubisoft Montreal, on franchises such as Rainbow Six, Child of Light and Assassin’s Creed. Before that, I spent a few years developing desktop graphics software at Corel.


Framework... for what? Turns out it's a framework for writing games, graphics, sound, that kind of thing.


From the article: "Please note that Plywood, by itself, is not a game engine! It’s a framework for building all kinds of software using C++."


It seems pretty clearly focused on writing games and similar, doesn't it? Given that those are all the examples.


> Given those are all the examples

https://github.com/arc80/plywood/tree/master/repos/plywood/s...

A video renderer, audio synthesis, parsing, and web server. Linked right from the article. The first is even the first demo in the article.


Apparently it includes a partial C++ parser too.


This framework has the potential to become another Qt if a UI is added. The coding style and documentation[0] are solid and similar to Qt's[1]. Given that Qt is heading in an unpopular direction[2], I am looking forward to this framework.

[0] https://plywood.arc80.com/docs/modules/runtime/api/string/St... [1] https://doc.qt.io/qt-5/qstring.html [2] https://www.qt.io/blog/qt-offering-changes-2020


I often see, including in this case, that GitHub contains only a squashed version of the development history. Assuming that the developer keeps working on top of the original branch with full history, is there a way to keep a squashed and a not-squashed branch in sync automatically?


When you say “squashed” vs “not-squashed”, are you referring to the “squash” merge feature in Github?

If so, that turns a PR branch into a single commit onto the target branch (master). Unless the old branches are kept around, the unsquashed commits won’t be available.


I'm referring to the squash feature of git rebase. The developer collapsed years of commits into one before pushing to GitHub, I am wondering if there is a way to keep working on top of the original history privately, and pushing to the "clean" history on GitHub, without having to cherry-pick all new commits for GitHub by hand.


I guess the "easiest" solution would be a clean and separate "github" directory where no work happens, into which you just cp all your changes from your actual working directory.

If they version it, I could also imagine using master as an orphan branch and just cherry-picking the changes between two tags or commits into it.


git replace¹ should be able to do it; otherwise, grafts² work too.

¹ https://git-scm.com/docs/git-replace ² https://git.wiki.kernel.org/index.php/GraftPoint


That is it! Fantastic, thanks!

https://git-scm.com/book/en/v2/Git-Tools-Replace gives the exact example of what I had in mind.


In this particular case it seems justified as the framework was part of a bigger project. So probably the majority of the commit history would not be relevant to the project in its current state.


Title needs an edit to clarify what kind of framework this is.


From the examples, I would say, the type that includes the kitchen sink.


But my kitchen sink just has dishes in it.


I usually wouldn't care about yet another c++ game framework, but this author's prototypes are so impressive and artistic that it makes me believe he's doing something more than just playing around with technology.


Browsing over the code I'd say it's solid and seems to have been used in production. Just about the right amount of abstraction for practical use.

Some parts could be replaced by standard C++ library functionality by now. My biggest issue so far: Use of bare pointers, especially in parser code. This should not appear in new code written in 2020. I'm kind of baffled by that as he seems to use stuff like std::move.

This is no replacement for either Unreal or Unity for sure; it's pretty basic in functionality.


I think "no bare pointers, ever" is overly dogmatic. Raw pointers still have a place, especially in game programming. The standard smart pointer types cover many common situations, but do not cover all situations (nor do I think were intended to).


> The standard smart pointer types cover many common situations, but do not cover all situations (nor do I think were intended to).

They also add a level of indirection, which can manifest as cache misses. If you're iterating through large numbers of objects, memory locality can be a huge gain.


> They also add a level of indirection,

Neither shared_ptr nor unique_ptr adds any additional indirection compared to raw pointers.

Of course if you can you should use inline value types, but that's a different thing. The parent was talking about replacing raw pointers with smart pointers (personally I have no particular issue with non-owning raw pointers).


This isn't a question of smart vs raw pointers, it's a question of pointer vs value. A std::vector<T> is considered good code in modern C++ and should often be preferred over a std::vector<std::unique_ptr<T>>.
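A minimal sketch of the locality difference (Particle is a hypothetical type, not anything from Plywood):

  #include <memory>
  #include <vector>

  struct Particle { float x, y, z; };

  // Contiguous storage: one allocation, cache-friendly iteration.
  float sumContiguous(const std::vector<Particle>& ps) {
    float total = 0;
    for (const Particle& p : ps) total += p.x;
    return total;
  }

  // One heap allocation per element: every access chases a pointer.
  float sumIndirect(const std::vector<std::unique_ptr<Particle>>& ps) {
    float total = 0;
    for (const auto& p : ps) total += p->x;
    return total;
  }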


> it's a question of pointer vs value

Yeah, the indirection I was referring to generalises to this.

Edit:

You might be saying that STL data structures could have been used by the author to alleviate the memory locality issue, but as noted in another thread, it's common to use custom STL implementations or entirely custom memory management in performance-critical applications, to reduce memory allocation frequency, memory usage and/or fragmentation. Or maybe the author is just more 133t than thou.


Yeah but I read a thing somewhere that it's bad so I'm gonna tell everyone not to do it cos I'm clever.


I think it's interesting to look at his code. I have not worked on games myself, but I've understood that most big game studios write their own versions of STL containers, in order to have close control over each and every CPU cycle spent, and in order not to get burnt by changes in the STL, or by slight, or not so slight, variations in performance between different implementations of the STL.

I see that he does the same, keeping his own lightweight string class implementation, for instance, and also his own lightweight smart pointer implementations (Owned and Borrowed). I suspect that he also uses bare pointers and new for similar reasons.

It would be interesting to know if he thinks it is necessary, or if he could have used std::string, std::unique_ptr and std::make_unique instead in this framework.


How do you tell that code has been used in production by browsing it? Game programmers aren't scared of pointers really, are they?

Also, as stated, it's not a game engine, so why would it be a replacement for Unreal or Unity?


> I’m releasing part of that game engine as an open source framework

I'm curious what this means exactly. Does it mean that it requires the non-open-source part to function? Or is it a standalone part of the non-open-source engine that can be used by other projects (like a library)?


It's the latter: A standalone framework that can be used by other projects. It's like a library (or suite of libraries) with separate modules for platform abstraction, containers, JSON, etc. A bit more "batteries included" than vanilla C++, and you only link with what you use. These modules are organized into a workspace that helps set up new build pipelines, to compensate for the lack of a standard build system in C++.


My guess would be it's the parts of the game that are generally useful and polished enough to be used as the base for other games (or non-game applications).

Games often have parts that don't quite belong to the engine layer, but also not strictly to this specific game. It might be small adhoc helper modules, glue code to other 3rd party libraries or similar random stuff.


Does it include UI components? Or is it designed to be used with third-party UI components like Qt?


No.

It looks to be focused on projects that do not use any UI toolkit at all. CLI applications and games, basically.


> Runtime reflection is, in my opinion, the biggest missing feature in standard C++.

Opinions?


It is a big pain when you need to implement things like serialization or data type mappers for stuff like RPC or database access.

However, with template metaprogramming and constexpr if, there is already quite a lot one can do to support it today, especially with C++20.
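As a rough sketch of that (Foo and the text format are made up for illustration), a single function template with constexpr if can dispatch on each member's type; the part you still write by hand, listing the members, is exactly what reflection would generate:

  #include <ostream>
  #include <string>
  #include <type_traits>

  template <typename T>
  void serializeField(std::ostream& out, const T& value) {
    if constexpr (std::is_integral_v<T>)
      out << "int:" << value << '\n';
    else if constexpr (std::is_floating_point_v<T>)
      out << "float:" << value << '\n';
    else
      out << "str:" << value << '\n';   // assume anything else is string-like
  }

  struct Foo { int a; float b; std::string c; };

  // Still enumerating the members manually; reflection would remove this step.
  void serialize(std::ostream& out, const Foo& f) {
    serializeField(out, f.a);
    serializeField(out, f.b);
    serializeField(out, f.c);
  }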

Post C++20, there are ongoing plans to have support for static reflection at compile time, and metaclasses.

Microsoft demoed the current state of their VC++ prototype at the Virtual C++ 2020 days.

"Dynamic Polymorphism with Metaclasses and Code Injection"

https://www.youtube.com/watch?v=drt3yXI-fqk


Strictly speaking, you can do it without that stuff. Here are some snippets of a test I was doing in Borland C++ Builder 1 (from 1996, that is, pre-C++98) a few years ago [0] (not using any of BCB's C++ extensions), which ends up with this [1] (it is a bunch of controls thrown onto a form mainly to test stuff, but the interesting bit is the property editor on the right).

If you do not mind relying a bit too hard on the preprocessor, you can also do it in plain C [2] (I mean, the C++ approach still relies on it a lot, but the C one goes all the way :-P).
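For reference, the heavy-preprocessor approach usually boils down to an X-macro field list, along these lines (a generic sketch in C++, not the code from the pastebins):

  #include <cstdio>

  // Declare the fields once, then reuse the list to generate both the
  // struct definition and a crude field table.
  #define FOO_FIELDS(X) \
    X(int,   health)    \
    X(float, speed)

  struct Foo {
  #define DECLARE_FIELD(type, name) type name;
    FOO_FIELDS(DECLARE_FIELD)
  #undef DECLARE_FIELD
  };

  void dumpFieldNames() {
  #define PRINT_FIELD(type, name) std::printf("%s %s\n", #type, #name);
    FOO_FIELDS(PRINT_FIELD)
  #undef PRINT_FIELD
  }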

A game engine (for a big published game, not my own stuff) I worked on in the past used this approach (the C++ one) heavily, and I think it is also quite common in some frameworks.

Though personally, outside of experimentation, I just use Free Pascal, which provides the functionality as a core feature of the language (and the next version will allow attaching arbitrary attributes to properties, which addresses my "I'd like to have a description for this property for the editor without writing a GetPropertyDescription method" wish :-P).

[0] https://pastebin.com/eCYs1tv8

[1] https://i.imgur.com/Ry5114y.png

[2] https://pastebin.com/8udWrNcv


Agreed, but that is why it has been an ongoing discussion at ISO; many devs are fed up with such approaches, because it is the kind of stuff that prevents C++ from having nice IDE tooling like Java and .NET.


Sorry for the noob question: can you explain how it helps with serialization or database access?

I've read similar statements before but I don't think I understand how it helps.


The standard serialization case is basically, I have this definition:

  struct foo {
    int a;
    int b;
    const char *c;
  };
and I want to automatically generate this function:

  void serialize_foo(foo &obj, ostream &out) {
    out.serialize_int(obj.a);
    out.serialize_int(obj.b);
    out.serialize_string(obj.c);
  }
With some sort of reflection, you can automatically build that method with something like this:

  void emit_serialize_method(class_definition &clazz) {
    emit("void serialize_" + clazz.name + "(" + clazz.name + " &obj, ostream &out) {");
    for (auto &field : clazz.fields) {
      emit("  out.serialize_" + field.type + "(obj." + field.name + ");");
    }
    emit("}");
  }
(syntax of course varies).


Yes (with a large preference for compile-time reflection / metaclasses - runtime reflection can easily be derived from that): http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p070...


And somewhat unfortunately, C++ only has plans for a kind of reflection in the future [1], but it's not here today. I'm wary of relying on a parser, though: it could prove a real drag as C++ evolves and you would need to keep the parser up to date.

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p059...


I've recently been toying with #[derive] macros in Rust.

I have to say that framing reflection in terms of deriving an implementation of an interface is pretty powerful. But C++ makes this harder than Rust does, because in C++ you need to mutate the class definition to insert these methods, whereas in Rust the definition happens outside of the class (as vtable pointers are not contained within the class but passed around separately).


Not necessarily so. Consider std::hash, which is a template heavily specialized for many types. Then containers like std::unordered_set use these specializations. You can totally turn a Rust trait into a C++ templated struct, and then turn Rust impls into template specializations of that struct.
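A rough sketch of that mapping (Serialize and toWire are hypothetical names, chosen to mirror how std::hash is specialized and then used by containers):

  #include <string>

  // Primary template: the "trait". Types opt in by specializing it.
  template <typename T>
  struct Serialize;   // deliberately left undefined

  template <>
  struct Serialize<int> {
    static std::string apply(int v) { return std::to_string(v); }
  };

  template <>
  struct Serialize<std::string> {
    static std::string apply(const std::string& v) { return '"' + v + '"'; }
  };

  // Generic code "bounded" on the trait, like a Rust trait bound.
  template <typename T>
  std::string toWire(const T& value) {
    return Serialize<T>::apply(value);
  }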

> I have to say that framing reflection in terms of deriving an implementation of an interface is pretty powerful.

Just wait till you see Haskell's Generic. (Not to be confused with generics in other languages.) It turns out for most applications you don't even need to derive an implementation of an interface.


You do not need the vtable to be inside the object itself. For example, C++'s std::function acts similarly to a Rust impl trait. But the lack of static reflection (or equivalent built-in functionality [1]) means you have to build a dedicated class for each C++ concept (the equivalent of a Rust trait), although libraries can help.

[1] The currently non-existent equivalent in C++ has sometimes been referred to as a virtual concept. Once upon a time, in the 2.x era, g++ had an extension known as 'signature' which would do exactly that.
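A small illustration of that kind of external type erasure, using only std::function (the surrounding names are made up): the callable itself has no virtual functions; the dispatch machinery lives in the wrapper.

  #include <cstdio>
  #include <functional>

  struct Doubler {                // plain struct, no vtable of its own
    int operator()(int x) const { return x * 2; }
  };

  int main() {
    // Any callable with a matching signature can be stored and swapped.
    std::function<int(int)> f = Doubler{};
    std::printf("%d\n", f(21));          // prints 42

    f = [](int x) { return x + 1; };
    std::printf("%d\n", f(41));          // prints 42
    return 0;
  }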


What is definitely missing is compile time reflection. I don’t think many devs care about the runtime one.


For a statically-compiled language, runtime reflection is probably a violation of the zero-overhead principle. There is a limited amount of reflection that can be done cheaply and that has some value: getting the address of the vtable of an object [assuming it has one] for uniqueness, or getting function pointer identities for generic invokable objects.

Given compile-time reflection, providing standard library functionality for certain runtime reflection tasks (such as providing a list of field information for a struct/class/union, or listing function arguments) could be useful. On the other hand, it could end up being a locale-like scenario where it's either overkill or too weak for your use case, never in the middle.


> a locale-like scenario where it's either overkill or too weak for your use case, never in the middle.

well put.


We can generate the runtime one at compile time anyway; that is how many Java and .NET compiler plugins and annotation processors work.


Exactly, with a compile-time one it would be relatively easy to build portable runtime reflection frameworks/libs, for those that really need it.

Otherwise it would be a feature like RTTI that everyone "hates" if enabled by default.


This might come true some day, but it seems it won't in C++20, and in recent static reflection proposals there was no way to distinguish reflected from non-reflected members in the same class.


Yep that is post C++20.

I am hardly a C++ user nowadays, but such stuff interests me, because the Windows Development team managed to push C++/WinRT as a replacement for C++/CX, but is declining any improvements to Visual Studio support (at least comparable to what C++/CX had) until ISO C++ gets similar capabilities to C++/CX.

So given that some C++ usage is required depending on which APIs you want to access from .NET, you can imagine there are many WinDevs not very happy with the downgrade in tooling support.

Back to your point, in what cases might such distinction be relevant?


One example is in: https://github.com/arc80/plywood/blob/master/repos/plywood/s...

Here, only one data member is meant to be serialized. The other members are there to accelerate lookups into the first data member. (Full disclosure: that structure isn't actually serialized yet, but the Arc80 Engine, which uses Plywood, has similar examples.)
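(A hypothetical shape of that pattern, not the actual Plywood code: one member holds the real data, the rest are lookup structures rebuilt after loading.)

  #include <cstdint>
  #include <string>
  #include <unordered_map>
  #include <vector>

  struct FileTable {
    std::vector<std::string> paths;                    // the only member worth serializing
    std::unordered_map<std::string, uint32_t> index;   // rebuilt from 'paths' after load

    void rebuildIndex() {
      index.clear();
      for (uint32_t i = 0; i < paths.size(); ++i)
        index[paths[i]] = i;
    }
  };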


Interesting! Could sufficiently enhanced compile-time reflection (say we do get supercharged CT reflection in C++2b) be used to implement fully transparent & automatic versioning in serialization frameworks? I know very little on this subject, but I think Boost.Serialization, though capable of handling multiple versions of a given class, requires manually 'tagging' source files as being, say, version 1, 2, etc.


Thanks!


I haven't looked at the C++ reflection proposal in a bit, but I think that attributes are reflected, so you should be able to use your own attributes to mark non-serializable objects.


Perhaps, but it's easily overlooked because every project that needs runtime reflection badly enough has already rolled their own. (Some game engines have multiple reflection systems, eg. shader parameters vs. serialization.) This framework takes a particular approach and makes it an (almost) standalone component.


Why do/did you go for runtime reflection for all that? I thought you would use it for external scripting or mods or things like that, but not for e.g. serialization. I guess I am missing something.

Everyone misses compile-time reflection because it solves many use cases easily (like serialization) without having to go for a complex solution or incur the performance penalties of runtime reflection. In contrast, I doubt many people care about general runtime reflection, which I'd expect to be a mess in C++ and most likely not as fast as people want.


The biggest advantage I find with runtime reflection is that it's possible to load and manipulate data that has no C++ definition at all. This is actually handy when working with 3D geometry consumed by a GPU.
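A hypothetical sketch of what "data with no C++ definition" can look like in practice: a vertex layout described entirely at runtime and handed to the GPU as raw bytes plus a description.

  #include <cstdint>
  #include <string>
  #include <vector>

  // Runtime description of a vertex attribute; there is no C++ struct
  // whose layout matches the vertex data itself.
  struct VertexAttribute {
    std::string name;          // e.g. "position", "normal", "uv"
    uint32_t    offsetBytes;
    uint32_t    numComponents;
  };

  struct VertexFormat {
    uint32_t strideBytes;
    std::vector<VertexAttribute> attributes;
  };

  // Raw buffer interpreted only through the runtime description.
  struct Mesh {
    VertexFormat format;
    std::vector<uint8_t> vertexData;
  };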


Will it replace the now defunct Djinni framework from Dropbox?


Plywood is C++ only, so no.


It is not just open source, it is under the MIT license... so now I am interested.


Ok, I am curious about the downvotes. What is wrong with my comment? Open source can mean a lot of things (e.g. GPL), some of which cannot be used in production (in my and some other people's environments). This one can, which is great. Reading on mobile, it took 2 minutes to find out. So I thought I'd spare other people the hassle of finding out themselves. And yes, I believe licensing matters, and open source is too general a term.


You seemed to be suggesting the open source might have some not good things about it. You can't do that ;)



