Blender 3.x roadmap (blender.org)
364 points by homarp on Oct 28, 2021 | 114 comments



I love Blender, but have been stuck on version 2.7 due to what I can only describe as some sort of icon dyslexia.

For 2.8, all the Blender icons were replaced with monochromatic ones. This is a very popular trend and a lot of programs are replacing their icons in this way, so it's obviously fine for most people, and I realize this is probably a niche accessibility need I have.

But, to use the new icons, I find I have to check each one every time to find the one I want. Instant recognition is no longer possible. For me, this extra cognitive load makes it difficult to use Blender for more than a few minutes at a time.

Blender is a fantastic product and one of the best examples of what open-source can be, but I for one will be appreciating it from afar, and remaining on 2.7 until there is a way to get more usable icons.


> This is a very popular trend and a lot of programs are replacing their icons in this way, so it's obviously fine for most people, and I realize this is probably a niche accessibility need I have.

It's not a niche accessibility need, it's a universal accessibility need that's been commonly understood for decades. Insufficient differentiation is one of the factors that originally drove the increase of colour count (and later, as display hardware allowed it, resolution) in icons 30-ish years ago.

This trend is not driven by universal preference for monochromatic icons but by the cargo cult that UX has become.


This is only true if we pretend that UI design is largely about button-level optimization. Clearly it needs to work on the macro level as well, and it's not farfetched to assume that optimizing every button, icon and text label for their individual local maxima will result in an application that's overall too cluttered for anyone but the most experienced users.


This is not a binary thing. There is such a thing as an interface that is clean and uncluttered, while still using icons that are at least distinguishable from each other. Otherwise you get none of the benefits of an uncluttered interface -- e.g. you have to hover by each button and read the tooltip to find the right one, which is precisely the kind of thing a clean interface is meant to avoid in the first place.

Also, Blender is not a grocery list app. While disregarding novice users does result in an application that's impossible to learn and that's obviously bad, it's equally unproductive to optimize an interface for people who see it for the first time. Blender is the kind of application that you spend tens, if not hundreds of hours learning before using it productively (let alone professionally). That's the target audience you're designing for, not people who download it and uninstall it if they get bored in the first thirty seconds.


Sure, my comment is in response to the claim that monochrome icons are user-hostile and pushed by a cargo-culting UX profession.

Plenty of professional tools (Photoshop, Final Cut, Figma) have monochrome icons without it being a usability issue. These are all content creation tools, including Blender. The UI and content should not be fighting for attention. It's clear which of those two should primarily be on display.

I think it's entirely possible that Blender just has poor icons - but it seems demonstrably false that monochrome icons are inherently inaccessible.

My best guess is that Blender's real issue is a lack of structure and clear grouping. Provided the icons are in a logical place, it's easy enough to find the correct one, without them needing to be visually distinct on literally every dimension. But if there's no logic to the placement and the icon you're looking for could be one of thirty, then I agree that's a usability issue - just not that the icons are at fault.


There were some significant reorganizations from 2.7 -> 2.8, 2.8 -> 2.9, and again from 2.9 -> 3.0. It's upsetting to see this much churn over such a short time for Blender.

Blender has always been very keyboard driven, but I don't use it often enough to have most of the shortcuts memorized. Even those seem to have changed, though. I'm sure there is a setting in the preferences to restore the old shortcuts, but it doesn't really feel like an appropriate thing to do with such a complex piece of software.

I know Blender has a "reputation" for being difficult to use. It used to be earned, over a decade ago, and it was indeed an impediment to getting new users. I felt like, even as just a casual user, Blender had hit a good sweet spot of organized, predictable, and powerful. It's just the very nature of 3D modelling to be complex, and it feels like most complaints about its interface these days are just from completely new people who don't understand this.

That seems to be one of the dangers of Free Software. Having no monetary barrier to entry, you get a LOT more first-time users. Without the sunk cost of having spent several thousand dollars on 3DS Max or similar, they don't have an emotional incentive to stick through the learning curve yet.


The add-on market is really growing, and something like Fluent Power Tools adds an amazing array of hard-surface modeling tools that are easy to use.


Oh yeah. To be clear, I still love Blender. It's just that some of these changes (like different icons, moving property fields to different places, changing the default keyboard shortcuts) seem like they're coming from the wrong place. It can get hard to search for information about doing specific things in Blender and to make sure the video you end up finding (cuz it's always a video, sigh) matches the version of Blender you're using.

I've spent so much time watching videos and being confused because the fields they show are not in the same place in the version I'm using. It's one of the reasons I choose to stick to the default preferences as much as possible: I don't use Blender often enough to know exactly what I'm doing all the time, so I want my system to match what other people are presenting.


I feel you, that does happen. My search usually adds the version number, to weed out that post from 2006 from Blender 2.25 :)


> all the Blender icons were replaced with monochromatic ones

GIMP did this too and it absolutely blows. Thankfully you can revert back to the "legacy" icons.


Is it a simple checkbox in the preferences dialogue or do I need to edit a config file for that?


It's in the preferences UI: Edit->Preferences->Interface->Icon Theme. There's 6 or so options in there.



So there is no way to mod the toolbars so you can show the old icons?


I mean you could always swap out the assets and rebuild. The icons are compiled into the binary so there’s no easy way to swap them out without recompiling blender.


Wow! I thought it was just me. I find it incredibly difficult to read icons. Even small design changes throw me - the most annoying was the change in speaker icon across versions of Windows. The bigger the icon the less of a problem it is but those fiddly toolbar icons throw me all the time. I have mentioned this to a few folk but honestly thought I was on my own.


I too cannot deal with monochromatic icons. FYI, 2.9 did (re-)introduce color to many of the UI icons, which made it much easier to use for me than 2.8.


It makes perfect sense, we are (most humans) visual creatures and color adds an extra dimension of information. Your issue sounds quite legit.


You could look at using a streamdeck to avoid using the icons altogether.


> The general guideline will be to keep Blender functionally compatible with 2.8x and later. Existing workflows or usability habits shouldn’t be broken without good reasons – with general agreement and clearly communicated in advance.

Oh boy. How I wish so much that other software developers would follow this principle... It seems nearly every piece of software I use and rely on has to change its appearance and interface every 6-12 months, breaking familiarity for no objective reason, simply because it "looks better" to look at (and not necessarily to use!) in someone's subjective eyes.


I wish they didn't. A lot of very popular software is stuck with counter-intuitive interfaces that pose a huge entry barrier for anyone approaching them, only so that people who learned 20-year-old idiosyncrasies can feel at home.


The question I always ask myself as an outsider: is this actually weird and outdated, or is it something that, once you get used to it, actually lets people work more optimally? Sometimes those power-tool design decisions are just bad old decisions; sometimes they really do enable the user. Look at Vim: not for everyone, but if you are willing to invest in learning its crazy, specific style of user interface, you can fly in a way other interfaces don't seem able to quite keep up with.


There's always a path to making things more approachable, no matter how powerful they are.

You could alter vim so that people with years of experience using computers and browsers at least know how to quit the goddamn thing without googling it. Or maybe even create a short text, save it, open it again, and move blocks of text around. For the average person, vim has 100% less utility than Notepad.

The power of vim doesn't come from obscure keyboard shortcuts. It comes from editing using parametrized commands (as far as I understand). You could make a modern editor with the same power as vim that a person can just sit down in front of and start working with immediately, gradually learning that the things she does manually could be done faster in command mode. And those commands might be the same as in vim, because once you go beyond the area of shared intuition you can do whatever you want.

The problem with vim is that it started in an era when shared intuition didn't cover basic text editing. That area has grown since then, but vim refuses to acknowledge it.


Is that a problem, or just reality? One thing I think of a lot these days is the "domain community and the beginner problem". Specifically, many communities I'm tangentially related to seem to overcompensate for beginner comfort at the risk of missing the entire depth of the domain.

I do not see it as a problem that Vim doesn't supply the features you're describing, though I do understand where you're coming from. I am affirmed in my view when we remember that vim runs in a terminal emulator, and the features you're describing are non-trivial to implement in that environment.

I don't mean to sound pedantic, but I don't see this as a problem at all, as it relates to Vim or to many other tools and domains. If Vim were incentivized to increase the size of its user base, I might agree, but this is not the case.


It's only a problem if you want more people to use the power and great ideas of vim. If not then it's not a problem at all. Just reality.


You can't always build something that lets you act like a beginner and gradually transition to the power user version, or at least people haven't always found ways to make that work. The UX challenge of allowing both experiences in the same app, especially with the ability to gradually move from one to the other, is very very complex.


I agree it's hard. Though my claim is that lack of will is more of a factor than complexity.

If any of vim great ideas ever enters shared intuition about computers it won't be due to development of vim.

The ability to gradually transition from beginner to power user is the natural way all modern software is written. That's why you have menus, can point at things with the mouse, can often drag stuff around, and have a cursor and a text box where the cursor keys, home, end, delete, backspace, and shift all work. And you have hints and indications of keyboard shortcuts. Plenty of stuff is discoverable.

You start with shared understanding and build upon that.

Some legacy UIs, like Blender's, evolve and adapt to broadening shared intuition; others, like vim's, fail to.


Well, Blender just did that exact thing going from 2.7 to 2.8 recently, although it was at least somewhat justified. But now that they've reworked it, it doesn't make sense to do it again any time soon.


I feel like that move really should have been a major version bump. It was a big change across the board: many aspects of functionality changed or were removed, UI locations and naming completely changed, and keyboard shortcuts and UX changed as well.

Minor bump 2.7 -> 2.8 = everything breaks, your workflow no longer functions, you have to relearn the API, online resources and documentation no longer relevant for many aspects of the editor

Major bump 2.8/9 -> 3.0 = everything is compatible with 2.8?? Just feels like 2.8/2.9 and what's referenced in the blog post should have been version 3 to me, but maybe they had some technical reason regarding the backend and scripting APIs?


I don’t have any inside info, but the vibe I got back then was that Blender just considered the x in 2.x to be the major version (with the idea that they weren’t going to release a 3.0). Though if that was true, it seems they’ve changed their minds in the couple years since.


The laws of digital physics say that FOSS jank must be conserved, they just moved it to someplace people won't mind. I'd rather have weird update numbers than weird updates.


It was a major version number change. Per the top of the blog post here, they're switching to a new versioning system with 3.x

Prior to 3.x, the major version was the .X part and the 2 was somewhat meaningless.

E.g. their versioning prior to 3.x was 2.major.minor.patch, and it will now switch to major.minor.patch.


You're right. I'm so annoyed by firefox changing its interface every once in a while instead of coming up with a good one that they can actually keep stable for years...


If Linux kernel devs had just written v5.14 first they wouldn't have needed to bother with any of the previous 90 or so releases. The fools!


The linux kernel does not break its users, save rare exceptions (security fallout).

I've rarely seen a program maintain two UIs forever when they feel like refreshing their looks.

Not that I mind UI change, but I think the comparison misses the point: if it's good enough, for some people UI breaks just cost more than they gain in the redesign. So it's not about reaching perfection. It's about finding a UI only just solid enough that it can stop breaking.

I don't necessarily agree personally, but I can understand that point of view.


Everything you need to know about Linux avoiding breaking user programs:

https://lkml.org/lkml/2012/12/23/75


Blender has had built-in well-supported full-featured pie menus for a long time, but Firefox still doesn't, and it's nowhere on the roadmap.


Blender is such an amazing tool. I have a side business where I create 3d printed jewelry with my customer's fingerprints on it in gold and silver (https://lulimjewelry.com) and I use Blender on the backend for all of the jewelry creation.

I run blender headless in a docker container on google cloud run. When needed I invoke it with an image and have a blender script "engrave" that image on the jewelry and output an STL file.

It is incredibly flexible to script in Python, although it's not very "pythonic". The UI is quite stateful (edit mode, object mode, which items are selected, etc.) and you have to keep track of that state in your program. But once you get around those issues you can do quite a lot, and it's all in a free program!
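
Roughly, the shape of such a script might look like this (a minimal sketch, not the commenter's actual pipeline; the file names, the "Ring" object, and the displace-modifier approach are illustrative assumptions):

    # Invoked headless, e.g.:
    #   blender --background ring.blend --python engrave.py -- fingerprint.png out.stl
    import sys
    import bpy

    # Arguments after "--" are passed through to the script untouched.
    image_path, stl_path = sys.argv[sys.argv.index("--") + 1:]

    ring = bpy.data.objects["Ring"]  # assumes the .blend contains an object named "Ring"

    # "Engrave" by displacing the surface with the fingerprint image.
    tex = bpy.data.textures.new("fingerprint", type='IMAGE')
    tex.image = bpy.data.images.load(image_path)
    mod = ring.modifiers.new("Engrave", 'DISPLACE')
    mod.texture = tex
    mod.strength = -0.05  # negative strength pushes the pattern inward

    # bpy is stateful: many operators act on the selected/active object.
    bpy.context.view_layer.objects.active = ring
    ring.select_set(True)

    # STL export (the STL add-on ships enabled by default in 2.8x/2.9x).
    bpy.ops.export_mesh.stl(filepath=stl_path, use_selection=True)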


I have a side business where I create 3d printed jewelry with my customer's fingerprints on it in gold and silver

There's a plotline for an episode of CSI crying out to be written here...


CSI: VRChat


Blender is amazing for the 3D rendering it can do, and for the fact that it is free... But it is intimidating to people who haven't learned it inside and out from a production standpoint, once they are doing a lot more than just importing simple objects.

But I kind of wish they had a "Blender Light", without all the features and config options, and with a less complex UI... I've been using Panzoid to do certain things, but Panzoid can't do rigging on imported object files...

I usually want to make animated videos in support of the music I release on my label, but right now doing so is either expensive or time consuming. I also don't want to put in film-studio effort or money into each music video release, because that is not a good business model, and my time is limited... The costs of being a creator are rising fast, only solid workflows will ensure survival.


You might be interested in the "Blender 101 (Application templates)" item in this roadmap:

> Being able to configure Blender for personal workflows is a key Blender target, part of the 2.8 project.

> To bring configuring even one step further, .blend files can be used to replace the standard startup file – creating ‘custom versions’ of Blender this way. These templates will allow to completely configure UI layouts, define custom keymaps and enable add-ons to run at startup. A smart Python scripter can design complete new applications this way, using vanilla Blender releases. As prototypes the team will release a couple of simple templates: a video-player review tool and the 100% dumbed down ‘Monkey Blender’.

> A template used this way will be called “Blender Application Template” or just “Blender App”. Planning is to have this working as beta for Blender 3.1, after extensive testing and reviews by contributors and module teams.

I think the bigger issue you're going to run into with "make Blender but simple" is that the subset of features that you want in a simplified blender is a different subset from what other people want. You want to do rigging on imported object files, but for somebody else rigging is a feature that would get cut out completely. They're just trying to model a doughnut and a coffee mug and do a still render of it, or at most animate the camera moving around the scene.


The added value of complex software like Blender is that it makes every task possible, reducing the risk of having to start over a project with better tools because easy to learn software with a constrained workflow and limited features was selected at first. 3D modeling and rendering is particularly suitable for this style of tool because there are many, many editing operations that can be applied to 3D models and many uses they can be put to.

Simplified software is suitable for simple, throwaway needs where the risk of choosing wrong and the cost of changing tools are low: yesterday I had to split a PDF file for the first and presumably last time in my life and I just printed it to a file by page ranges, without bothering to select, install and learn to use a PDF editor like Acrobat.

Mastering general, not "light" software is the only practical foundation for relatively professional "solid workflows" (as opposed to learning the basics of 3D modeling or something else with minimum accidental complexity); there might be a place for very advanced and/or very efficient but very specialized tools (e.g. MagicaVoxel, procedural generators of 3D models, Substance Painter) but only as an addition for equally specialized situations, not as an easy route.


> But I kind of wish they had a "Blender Light", without all the features and config options, and with a less complex UI

I get what you're talking about, sort of an "iMovie" to Blender's "Final Cut Pro". I think 2.80 actually did a ton of work in that regard, if you've seen it since then. It could definitely be simpler though, and with the UI programming the way it is I suspect it wouldn't take all that much internal change.

Unfortunately, I think a lot of the reason it's as intimidating as it is is that the main aim for buy-in, for now, is studios, who favor the power-user, maximal interfaces of Cinema 4D and Maya and the like.


> But I kind of wish they had a "Blender Light"

Since Blender is open source, it would be possible for some other group to fork Blender and remove parts of the UI for users like you.

I really wish the core team wouldn't focus their effort on this. Blender is a professional tool for professional users. Having to balance between "dumbing down" the UI for first time users and providing a user interface for power users makes it hard to please either one of them, so I'm happy Blender currently focuses on the pro users.


Totally agree. Some time ago I also ran Blender headless with a script to render 'product shots' of a product that is available in over a hundred different combinations. What is also very nice is that you can turn on API hints that show when you hover over an element in the GUI. So you can quickly learn how to access or manipulate data.


Yeah, you can also have it print the Python command called for every action you take in the GUI.


Could you point me to some documentation for this? This has been the only hindrance for me to automate things without digging into the documentation myself


You can use the `--debug-wm` flag for that. CTRL+C when hovering over a button also copies its API action. https://docs.blender.org/api/current/info_tips_and_tricks.ht...
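
A minimal illustration of both tips (the exact echoed output varies by Blender version):

    # Launch Blender so that every operator you trigger in the UI is
    # echoed to the terminal as its Python call:
    #   blender --debug-wm
    #
    # Clicking around then prints lines along the lines of:
    #   bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))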


You can either switch to the "scripting" tab or just add/change any window to the "Scripting > Info" window. I personally change the animation timeline window to the info window on the main "Layout" tab since I don't do animations.


I just wanted to say that your process sounds incredibly awesome. I was curious, what type of instance do you have running behind your Cloud Run job?

I know rendering can be quite heavy on a CPU, but it sounds like you're running a series of commands to generate a model instead.


Honestly, I am running it on the lowest-level instance (1 CPU, 512 MB of memory).

And I don't actually do any rendering on this instance, just the STL generation. I use Three.js to display/render the STL once it's done.


Cool setup; have you posted more about it?


No, but I have been meaning to one of these days.


That's a really cool application of Blender! I'm also using Blender headless in docker containers on Google Cloud, though not using Cloud Run. I'm doing it for self-service molecular visualizations -- "create your own molecule video". It's at https://bioviz-studio.com. But I run up to 50 dual-T4 instances for speed (Cycles X helps a lot!). And I agree the python API is a bit weird, and hard to use, but once you get the hang of it you can create really complex and interesting scenes just using python.


> my customer's fingerprints

And you get to build a database for any state agencies that might be interested in buying (or taking)!


When I started this I spent some time googling to see what sort of harm can come from your fingerprints being leaked, and I couldn't find anything concrete. It's not trivial to take a photo of a fingerprint and turn it into something capable of fooling a smartphone fingerprint reader. Someone with the ability to do that probably also has the ability to lift your print from the thousands of things you touch in your daily life.


As opposed to the database the state agencies already have because they take your fingerprints when you get ID?


They mean pub key / certificate fingerprints. Gosh... :)


I’ll happily engrave your public key on a ring!


Huh? Is building a database of actual finger fingerprints not worthwhile?


Not really, unless you're trying to build up a private fingerprint collection which can't be used for criminal cases anyway.


> build a database for any state agencies

As opposed to building a database for private agencies?


I love how hacky this solution is (yet inspiring that it drives a business).


I have a question about your business. Do you 3d print metal yourself?


No, I partner with a casting house.

And the metal is actually not 3D printed. How it works is the design is printed in wax first, this wax is then used to make a plaster mould into which the molten metal is cast.


Can an Ender 3 print wax?


No, you'll want an SLA printer rather than FDM. Something like an Elegoo Mars.

Though there's a bit of an issue going on in that space right now where the company that makes the controller boards for the affordable consumer models has decided to DRM lock their file format and require all of these printers to use their proprietary slicer (Chitubox), locking out the generally better liked open source alternative (Lychee).

If that's an issue for you, you might be able to find an older model that hasn't been updated to recent firmware, but personally I'm waiting on any resin printers until we see how this shakes out. There was some noise about Chitu hearing the community and creating an SDK, but Lychee didn't sound very excited about it.

EDIT - actually it looks like someone has created a lost wax casting filament for FDM printers. Look for "MoldLay". But in terms of printing detail, for small jewelry type projects resin printers are still going to be the better choice.

https://filament2print.com/gb/lay/645-moldlay.html


Using lost-PLA casting?


The rings are printed in wax and then a plaster mould is made using the wax. The gold or silver is cast using the plaster mould.

I don't do any of it myself; I have a casting partner who does it for me. You can also find online services like Shapeways to do it as well, but they charge a lot more than the real casting houses. Some casting houses I've worked with were the back ends for Shapeways themselves.


Could you provide the names of some of the casting houses? I've only used Shapeways for some personal projects, but would love to have some more options. Nice shop, BTW.


Sure, send me an email and I can make some introductions. (jack @ domain I posted in my original comment) The casting houses I work with are in the LA jewelry district. I contrasted them with other "online services" because these are more old-school businesses that like to do business over the phone and receive payment via cash/check. You also don't know what the piece will cost until it's cast, since they charge by the weight of the final piece. Most of the casting houses also do not do wax printing or polishing, so you will have to find other people to do each of those steps. Luckily the jewelry district has people who do everything; it just takes a bit more work to coordinate it all.


Great to see they're working on Metal support for the viewport (and Vulkan, of course). Apple took their sweet time coming onboard as a sponsor, but now that they have I hope it will give the developers all the support they need to work out the issues.


I have a MacBook Air with an M1, and Blender is extremely fast (emulated x86). If only Macs had sensible keyboards so the hotkeys would be easier (not a Mac fan, just got it for free). Highly recommend a numpad, though; it is the pinnacle of input technology.

But even older hardware can run Blender without needing a good GPU, as render previews have become quite fast. If you create models for real-time rendering, an older PC without some super GPU is quite enough to work with.


Fingers are very crossed for a touch-capable Blender.


Not quite what you're after, but Blender in sidecar works really, really well for pen-based workflows. I can sculpt and texture paint with the Apple Pencil and it all just... works.


I was interested in this workflow; it would be a big motivation to get an iPad Pro when the time comes to get a compatible MacBook too. Are there any limitations you've found, especially with modifier keys, compared to a dedicated Wacom tablet?


If there was a single feature I could see added to Blender, it would be to share more functionality between objects and collections. Blender has an organizational concept called objects, which are containers with position, scale, rotation, and they contain within them the mesh data. These can be warped, arrayed, cast, decimated, etc, through the use of non destructive modifiers.

Now, these objects can be combined into a higher level organizational container called a collection. These, too, have a position, scale, rotation, etc, and behave much the same way as objects. They can even be combined into parent collections.

Now, what I would love would be to have them gain much of the functionality of objects that they don't currently have, most importantly modifiers.

Say I was building a well. I make a single brick, an object. I can array this brick object to create a wall, add a corner to the wall, and add it all to a collection. Now I have this wall "object," and maybe I want to array it 8x around a center point in order to create an octagonal well. I can't, because I can't place modifiers on collections. For all intents and purposes, it is just another object, and I would love to see any push towards allowing it to behave as such.
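
In bpy terms the asymmetry looks roughly like this (a hypothetical sketch; the "Brick" object and "Well" collection are made-up names):

    import bpy

    brick = bpy.data.objects["Brick"]

    # Objects take modifiers: array the brick into a wall, non-destructively.
    wall_mod = brick.modifiers.new("Wall", 'ARRAY')
    wall_mod.count = 12

    well = bpy.data.collections["Well"]  # the wall plus corner, grouped

    # Collections have no modifier stack, so there is no equivalent of:
    #   well.modifiers.new("Octagon", 'ARRAY')  # AttributeError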


Collections and objects have fundamentally different purposes. An object's job is to anchor a bit of geometry information (e.g. a mesh) in the scene. Collections serve as hierarchical grouping and instancing. Instancing is specifically intended to not produce deep copies of objects. This is essential for creating some types of really complex scenes. Any modifier - destructive or otherwise - requires one or more deep copies to act on. Providing modifiers on collections isn't good UX because it blurs the line between what is and isn't an instance.


Yes, except that you CAN instance collections. You can do linked imports of collections and you can do linked duplicates of collections. And I understand the difference conceptually, but what I'm saying is that the distinction always ends up being arbitrary to me from an actual usage point of view. If I've made a wall or a tower out of mesh data directly, or out of a collection of objects, it makes no difference to me in regards to my intentions. I have a wall that I would like to instance around my scenes, and I have all the same motivations for wanting to use a non-destructive workflow as I would were it an object. If I were to have to combine my collection into an object, it would defeat the purpose.

Say I was building a scene like King's Landing. For each building, I'd like to have a file representing that building, that I can instance around a scene representing, say, a city block. I'll do a linked import of the building, so any changes I do to the building will take effect around the scene, and I'd like to make use of arrays and other modifiers for ad hoc instance changes. Again, my motivations for wanting a non destructive workflow are completely the same, regardless of whether that building is an object or a collection.

Now, as of now, my only option is for it to be an object. However, what if I want to use a collection of other objects in order to build the building? I want a cellar door that I'd like to have instanced. I'd like a ladder outside, and a few barrels on the second story. I want these instanced for all the same reasons. My preference, of course, would be for the building to be a collection of all these things, but if I want to then treat my building as one of these such objects at the next higher level of detail, I can't.

So you're dismissive of my intentions by insisting that the difference is meaningful, whereas I simply see it as a failure of abstraction. Objects and collections are ultimately both worldspace coordinates holding mesh data, the only difference is if they're nested at all. And like I said, collections CAN be nested. That relationship is already established. Collections CAN be instanced, that use case is already established. It's really just the modifiers that are missing. And from a programming perspective, the logic of "get the mesh data from the object I've been assigned to in order to apply this transformation" is the only thing that needs to be tweaked. Rather than going one level deep, it will say "and if this is mesh data, stop, else iterate."


I get what you're saying and it's possible to implement all that, but the current workflow is what it is for a reason. Applying modifiers to collections would result in too many UX surprises and pretty bad footguns.

What you want would be less of an issue in a tool that allowed for a more procedural approach where the assembly of complex objects is part of a modifiable construction history that is separate from the final scene graph. In such a context, merging objects is technically destructive, but repeatable. I've written such a modeler once.


You're correct in saying there's no real difference between objects and collections, but there is a meaningful distinction between mesh data and objects, the object being just a container for the mesh. The modifiers, including array, actually work on the underlying mesh data, not at the object level, and produce a mesh object as an output. As such, the array modifier does not utilize instancing but actually copies the mesh data multiple times in memory (both RAM and GPU memory), which is definitely not what you want with large, complicated scenes.

So it's not actually about the modifiers being able to be used for what you want in one scenario and not being usable in a slightly different scenario; the array modifier is not really the tool for the job here at all.

The tool to be used for both of the use cases here is instancing, not the modifiers. Setting up your scene with multiple levels of collections and using proper instancing (which the array modifier is not) certainly is possible; there are just a couple of pieces of the puzzle that you seem to be missing (or unobvious hoops you have to know about, depending on how you look at the UX).

Instead of arguing about philosophy and good UX design, I hope you don't mind me elaborating in a more step by step way on how to actually set a scene such as your example up.

You can spawn a collection instance at the location of an "empty"-type object, by selecting "collection" as the instancing type in the instancing section of the object properties. You can have many of these empties and position them manually if you were so inclined.

The other thing is that you can use a mesh to spawn instances of an object at each face of the mesh. So you can create a mesh object, and set up instancing to spawn aforementioned "empty" object, that in turn spawns the whole collection, at each face of the mesh. (You do this by parenting the empty to the mesh, and setting instancing type to "Face" in the instancing panel).

This mesh would typically be rather simple, for example just a couple of disjointed faces scattered where you want your collections to spawn. Or just a single face, and mesh modifiers applied to it, such as array. So instead of arraying the collection, we array a mesh consisting of a single face, and each face of the arrayed mesh spawns an empty that spawns the collection, but the end result is the same. (No idea why each face can't be set to spawn the collection directly and we need an intermediate "empty").
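
As a rough bpy translation of those steps (a sketch under the same setup; object and collection names are illustrative):

    import bpy

    # 1. An empty that instances the whole collection.
    empty = bpy.data.objects.new("WellInstance", None)
    empty.instance_type = 'COLLECTION'
    empty.instance_collection = bpy.data.collections["Well"]
    bpy.context.scene.collection.objects.link(empty)

    # 2. A mesh whose faces each spawn one instance of its children.
    bpy.ops.mesh.primitive_plane_add()
    spawner = bpy.context.active_object
    spawner.instance_type = 'FACES'
    empty.parent = spawner  # parenting makes the empty the thing that gets instanced

    # 3. Array the spawner mesh; each generated face spawns the collection.
    mod = spawner.modifiers.new("Array", 'ARRAY')
    mod.count = 8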

You can also set instances to appear from particle systems etc, for example if you want your houses to just be randomly scattered on a surface of an object (like mesh in shape of the city or so)


Thanks for the tips, although I did know those. Instancing/array is just one functionality. What if I'd like to throw on a lattice to tweak one instance to fit somewhere better? What if I'd like to use the curve modifier?

The point is, the fact that you can do instancing through the workarounds means that there isn't a technical barrier to doing any of this. You say that it would introduce footguns to the UX, but I consider the workarounds to be footguns.


But there is a technical barrier, which stems from what 'instancing' actually is on the technical level: uploading only one copy of the geometry to the GPU and rendering it in multiple places, which you can't do if you want a separate modifier stack for different copies (since modifiers are applied on the CPU beforehand). The instancing features are ways to set up just that kind of instancing, and as such are fundamentally incompatible with separate modifiers between instances of the same geometry.

But yes, I do see the use case for collection-level modifier stacks; I just don't think implementing those is exactly as trivial as you make it out to be. Not impossible either, of course (for example by doing some sort of copy-on-write thing if the geometry is modified), but it'd be introducing a different kind of copying mechanism with significant performance differences, which smells a bit footgunny to me.


Have you tried geometry nodes? It lets you use meshes from any object, allowing you to build up modifiers non-linearly. I would say it covers 90% of the feature you're proposing.


You basically want Cinema 4D mograph. Unfortunately, Blender does not do that at all, unless you use Animation nodes addon, which works, but it's an addon.


There's a major update coming to the geometry nodes feature in 3.0. It does a lot of the same stuff as animation nodes and even has the same lead developer.


Check out Geometry Nodes.


I have, they're super useful. They basically illustrate that what I'm talking about is entirely feasible. In the same way that you can non destructively combine object mesh data into higher order objects, this functionality should be available in the object hierarchy graph.


I do understand that this a roadmap for the 3.x series and not for the very first 3.0 release, but oh boy, is that a lot!

Improvements across the board. Is there even a subsystem that doesn't have big plans? Apart from physics, and that seems to be subsumed into Everything Nodes.


Well, some things are already done. For example, Cycles X will be released at the end of this year.


The 3.0 beta came out last night. The speed increase is staggering, I can easily use cycles in real-time in the viewport now!


Oh wow that sounds amazing. I didn't know what to expect but even if the increase is only 25% of what you describe it'd still be amazing.


I wish they would extract their GUI layer as a framework. It's so good that it would be perfect for many programs.


That sounds like a great idea, but I'm not sure how "framework-able" a great UI is.

I was wondering how you might apply a Blender GUI layer to GIMP (I tolerate GIMP, but I love Blender). I imagine the issues run deeper than just the surface level UI? It would be a great experiment though!


Agree. Just yesterday I was praising Blender's node editor, and I'm encouraging the developers to mimic its functionality in a product we are building.


I especially like that they plan to decouple Time from Frames. That could lead to extremely cool ways to deal with time in the future (e.g. automating speed changes via keyframes to slow down or speed up time globally).


The biggest thing I'm waiting for is Metal support for the Cycles render engine. It's CPU-only on Apple Silicon, which is a waste on Pro/Max chips.


...or Apple could bite the bullet and just support the open, next-generation standard everyone is using. Metal is silly, if it's 'superior' to Vulkan, then at least give developers the option. Sadly it's another App Store situation, where you're forced to entrench yourself if you want basic functionality. It feels like MacOS is the new Linux, with how little software actually works on it now and how bad the performance overhead can be in many apps...


But Apple doesn't need a portable cross-platform API over which it has limited control and whose functionality it can't accurately forecast. It needs something it can tightly control, which grows side by side with its hardware ambitions/product vision. OpenGL did little to make the Mac easier to target while supporting a standard that sank into irrelevance; I'm sure they're not keen to make that mistake twice.

The bonus of this approach is that Apple needed to directly sponsor and support Blender to make it happen.


OpenGL was inevitably going to be deprecated; you won't find me crying many tears over its grave. Vulkan is an entirely different beast though, one that Apple should support. They have the resources to make it happen; any excuses they make are quite obviously perfunctory and dismissive. Maybe Apple doesn't need a cross-platform graphics API, but I do. Their 'devil may care' attitude towards backwards compatibility doesn't make me confident in owning a Mac.


The Blender x Internet project is called Meta! Quite unfortunate timing.


> Eevee [...] screen space global illumination

Awesome!


My GPU is crying tears of blood!


Is the 2d constraint-based sketcher getting some traction?

https://blenderartists.org/t/geometry-sketcher-constraint-so...


That's A LOT of updates to do! Great roadmap tho, I only recently started learning Blender and I'm amazed at the capabilities it has.

Can't wait for it to get even better.


I hope they eventually add more support for 2D in the style of Toon Boom or After Effects: being able to draw with vectors, create puppets, and then animate them. 2D animation is sorely lacking when it comes to open-source software.


_sigh_ ... Still no C++ API :-(


I can guess from the comments what Blender does, but I am actually not familiar with the tool.

I wandered around the website provided in the link above, but didn't find a simple explanation as to what Blender is about.

Perhaps adding such an explanation to the website can be useful.


The front page literally goes through the features of blender, with screenshots and descriptions of features:

"Modeling, Sculpt, UV Blender’s comprehensive array of modeling tools make creating, transforming and editing your models a breeze.

· Full N-Gon support · Edge slide, inset, grid and bridge fill, and more · Advanced sculpting tools and brushes · Multi-resolution and Dynamic subdivision · 3D painting with textured brushes and masking · Python scripting for custom tools and add-ons"

Not sure what else they could need; it does exactly what you're asking for.


Does https://www.blender.org not make it super clear?

- Render Engine

- Modeling, Sculpt, UV

- VFX

- Animation & Rigging

- Story Art, Drawing 2D in 3D

How much more clear could they make it?


There's lots of articles posted to HN concerning specific topics, tools, etc... Not everything needs to be explained upfront, you can do your own research.

In this case, a simple click on the Blender icon goes to their homepage, which explains it's 3D modelling software.


Take a look at IanHubert's lazy blender tutorials: https://youtu.be/JjnyapZ_P-g. They're a minute long each, so not useful as tutorials. But they do a good job of showing generally what Blender can do, and how an experienced artist approaches solving a particular problem.


I was confused when I first encountered it too. It does so much that I was left wondering if it is for digital animators, for 3D-printer artists? 2D art?

It turns out all those types use it (but I believe it started out for digital animators primarily).



