This is a design exercise, not a product. And as a design exercise, I think it really nicely demonstrates the author's ability to think of radically new designs, even if I'm not convinced by many of them.
In grad school, my advisor told me: I can't teach you to have interesting ideas, I can only teach you which ones to pursue. It's ok to have lots of bad ideas, because that's the first step to having a few good ones. As for the author, I am impressed.
While the design is new and cool, I wouldn't call it "radically new". It puts together designs that exist in different contexts.
Panels are already used by tiling WMs (and I believe recent versions of Android as well). Scrolling through apps with a three-finger swipe already exists too (I have an extension for Firefox for Android that does this to change tabs, it's a nice feature).
Sorting documents by hashtag is something Gmail has done since its beginning.
By that logic, basically nothing is ever new. Almost everything can have its bits and pieces decomposed and traced back to something else. At the end of the day, new ideas come from cleverly combining existing ideas.
In this case, it's mixing eye tracking, a tagging system, and a side-scrolling array of panels.
One thing that I am fully with the author on is that the desktop metaphor is in need of an update. Many of the terms we used to soften the transition from typewriters, pens, paper, and filing cabinets are not going to be relevant for younger generations.
The hourglass had been irrelevant for quite a while when it was (successfully) employed as a UI metaphor. I wouldn't assume pens will become obsolete any time soon, either.
And while filing cabinets do fall out of fashion, the metaphor is deeper here - we will always arrange our possessions in some sort of hierarchical structure. Whether it's a filing cabinet at your office, or drawers and cupboards in your kitchen.
This concept is guaranteed to have underlying physical representations, even if some particular use cases will come and go. Not quite so with tagging (suggested by the author as an alternative for folders).
Tags are inherently more abstract. I don't really have a clear-cut, real-life way of associating my possessions with a tag cloud, whereby I could plausibly filter out all the #electric stuff I own, or #healthcare stuff, or both at once etc.
I think the best designs are iterative. I would agree that things are very familiar, but that is how most things gain momentum. I would honestly use this over what I have now, because it makes what I need to do much more obvious. The design is context aware in a way existing OSes are not.
I think you are setting up a bit of a straw man argument. The parent comment said "existing designs." While you are apparently refuting their statement, you are actually saying something different by jumping to "existing products." Products and designs are not the same thing.
They are both designs lol. An iPod, iPhone, toaster, whatever is a design. It is then changed from a design into a physical product. It's the exact same thing as a UX design of an OS: you can turn it into an OS. It's just not fully done yet.
The eye tracking part was pretty radical. I mean, it's starting to get into VR headsets but just for focusing the processing power to an area. Not specific UI interactions like this person posed. That seemed pretty novel to me.
There are lots of UI interactions in AR/VR based on gaze. It’s a top-tier paradigm in Hololens, for example. Personally I find it annoying; I want to concentrate on the content not some button or action zone. Part of the problem is that our eyes don’t work the way we like to imagine, they skip around and look at lots of different things and our brains then synthesise a gestalt view. There is a tendency for our eyes to be pointing more at the things that interest us but not _specifically_, eg at the keyboard, not at a key.
I think it is possible to use gaze as a secondary input in a UI and would love to hear examples of where it has been done because I haven’t seen any yet.
You need gaze tracking when you've got more than 2 displays. When your consciousness thinks the pointer arrow is near the content you intend to interact with, but your eyesight can't pick it up because it is actually 1920px away to your side, it starts to feel more confusing than just frustrating.
The bit about replacing folder hierarchies with tags and search reminds me of some ideas that are actually pretty old now. Back in the '80s and '90s, there was a wave of interest in replacing the traditional filesystem with a relational database, for most of the same reasons as outlined here (we can store more files than we can meaningfully organize, hierarchical organization doesn't really fit lots of use cases, searching is easier than clicking up and down a hierarchy, etc.).
None of these ever really took off, though. Mostly this was because they ran into unsolvable performance problems. Hardware has come a long way since the mid-'90s, though, so it'd be interesting to see if those ideas that were impractical then are practical today.
I think those ideas don't take off because tags don't really solve the problem adequately for normal people.
A folder structure is a self-contained map that will help guide me to the correct document. Each step has hints about where I can go next (sub-directory names), starting from most broad going to most specific. It takes advantage of both locality (similar topics are usually next to each other) and logical, hierarchical grouping.
A tag is a random thought or property stuck onto a file that I now have to remember every time I want to find that file again. I have no way to specify that tags may have a useful hierarchy of other similar but distinct items without manually coming up with a tagging system to give me that. Folders do it by default.
Basically - Folders are a good enough default. Tags might be better in some cases, but you have to be excruciatingly disciplined about using them, and I find most people just end up recreating a structure that looks suspiciously like folders.
I want to have both options because traditional folders and a tag based file system solve different problems.
Let's say I go on vacation with my dog and take pictures. After I am home again I want to sort the pictures, but then I have a problem: the pictures in which you can see my dog belong in the '2019 vacation to Bavaria' collection _and_ also in the 'Best pictures of my dog' collection.
I'd love to have some sort of universal file-database where I can store all my "final" images and then create collections by adding tags.
In macOS (for some number of years now) you store files in a standard folder hierarchy, and you can add tags to files. With or without tags, you can use Spotlight (cmd-space) to quickly find files.
I think we'll start to see that happening as machine learning is used to tag the files. See Google Photos for example, where you can basically use search in place of any sorting.
I think most of the photo organizers offer this? I remember using digiKam back around 2006 or so, and it already had this feature.
I think tags have limited scope. They are great for photos. They are OK for music, but strictly in addition to the main hierarchy. You could use them for text docs, but folders + full text search is much better. And file level tags are completely useless for code/programming
For example, my music collection is big and diverse, and both "year sort" and "album sort" are kinda useless now, because there are actually multiple disjoint subsets. There is no point ever in showing me audiobooks for year 2010 and regular music for year 2010. I always only want a subset of it.
This is what I meant by "strictly in addition to the main hierarchy" -- let me keep my folders, and maybe when I want to go deep enough, I want to browse by tag. But even then it would not be the hashtag-like tags that the original page refers to.
A hardlink is a (poor, partial) implementation of a tag in the file system. A tag will recover a collection of all the elements filed under the same label, and a link can't do that.
I think hard links could. To take the up-thread example, you have one folder called "Vacation2018", and another called "BestDogPics". The photo of your dog on vacation lives in both folders, but hardlinked together.
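For what it's worth, this is easy to try on any filesystem that supports hard links. A minimal Python sketch (directory and file names are made up for the example):

```python
import os
import tempfile

# One photo "filed" under two folders via hard links.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Vacation2018"))
os.makedirs(os.path.join(root, "BestDogPics"))

original = os.path.join(root, "Vacation2018", "dog.jpg")
with open(original, "wb") as f:
    f.write(b"...photo bytes...")

# A hard link is a second directory entry pointing at the same inode.
linked = os.path.join(root, "BestDogPics", "dog.jpg")
os.link(original, linked)

print(os.stat(original).st_ino == os.stat(linked).st_ino)  # True: same file
print(os.stat(original).st_nlink)                          # 2 directory entries
```

The catch the upthread comment alludes to: given the label "BestDogPics" you can list that folder, but given only the file there is no cheap way to enumerate every label it was filed under; you'd have to scan the whole tree for matching inodes.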
I think the key thing here is that files can indeed be categorized in different ways, and could - and should - exist simultaneously in different collections with different structures.
I therefore think it'd be worth separating the concept of a "filesystem" from a "folder system" or "index system". That is: keep the file storage itself flat (e.g. in a relational database table), then have different categorical "views" that could be relational and/or hierarchical pointing to these files. Naturally, those collections will have their own sets of metadata for that file.
So for example, you have a file named "img-8675309.png" in your camera's storage. The operating system presents a view of said camera storage, in the form of a flat list of files and some basic metadata like creation date (plus perhaps camera-specific metadata if, say, the camera driver is the thing generating the view). You could then open up views for your 2019 Bavaria vacation ("Vacations" → "Bavaria 2019" → "Photos") and your dog ("Pets" → "Fido" → "Photos"), set a sorting field for each view (for the vacation, probably chronological; for your dog, by however you define "best"), drag/drop the camera file into those views (in the latter case, maybe even drag it into the spot where you want it to show up ranking-wise), and the operating system would then add references to that file automatically (almost certainly copying it into a local cache) in both your opened views and potentially in some system-maintained views (e.g. "Local Files" → "Photos").
One of the slick things here is that file access could be entirely transparent to how those files are stored. For example, those views will of course include your device's internal storage, but might also include external devices (like the camera in the above example) or even remote services (like, say, your social media account). If you accidentally delete your prized Fido photo on your local machine, the "Pets" → "Fido" → "Photos" view could still have a reference to the copy on the camera, or a copy in your social media posts, or a copy in the system backup that automatically ran last Sunday, and thus retrieve it and re-cache it locally (or prompt you to plug your camera or your external USB drive back in so it can check there).
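A toy version of this "flat store plus views" split can be sketched with SQLite; every table, column, and path name below is invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Flat store: every file is just a row, no hierarchy.
    CREATE TABLE files (
        id INTEGER PRIMARY KEY,
        name TEXT,
        created TEXT
    );
    -- A view is a named collection of references into the flat store,
    -- carrying its own per-view metadata (here: a sort rank).
    CREATE TABLE view_entries (
        view_path TEXT,      -- e.g. 'Vacations/Bavaria 2019/Photos'
        file_id INTEGER REFERENCES files(id),
        rank INTEGER
    );
""")

fid = db.execute(
    "INSERT INTO files (name, created) VALUES (?, ?)",
    ("img-8675309.png", "2019-07-14"),
).lastrowid

# The same file referenced from two independent hierarchies.
db.executemany(
    "INSERT INTO view_entries VALUES (?, ?, ?)",
    [("Vacations/Bavaria 2019/Photos", fid, 1),
     ("Pets/Fido/Photos", fid, 3)],
)

rows = db.execute(
    "SELECT view_path FROM view_entries WHERE file_id = ?", (fid,)
).fetchall()
print(rows)  # the photo shows up in both views
```

Deleting a row from `view_entries` would remove the photo from one collection without touching the file itself, which is exactly the property folders lack.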
I would like both as well. I can see value in tags, but I am thinking from a developer perspective: I'm not sure how I would target a specific file in my code without a tree structure. Maybe I just haven't thought about this enough, but it seems necessary.
While I agree with the general sentiment that the mental model of tags is more complex and discovering the right tags is a bigger challenge than slapping some tag onto some file, I don't see it as something that needs to replace a traditional file system.
Tags are more useful for certain types of files than others. E.g. let's say you want to mark certain projects done. Instead of moving them to a done folder you can just add tags to them – without changing any paths you change their status.
Tags are also quite useful for pictures, music and other media content.
I never really got why, even in 2019, there is no good support for tags on most desktops.
Windows has had file tagging since at least XP. Not a lot of people use them (probably because the UI for it is hard to discover), but I have used stuff like royalty-free photo libraries that have searchable tags.
Moving and duplicating both preserve the tags: they're metadata. They're part of the reason for those pesky .DS_Store files that proliferate on network shares and non-Mac filesystems, in fact.
> A folder structure is a self-contained map that will help guide me to the correct document.
A hierarchical structure like folders also maps well to how we locate things in real life.
Suppose I want to get a particular book I have. I enter my home through the front door, pass through the living room, from it enter the corridor which links all the rooms, then enter the office, within it go to the bookcase, open the first door on the left, go to the third row, and it's one of the books which are located in that shelf.
In a typical house there's likely also at least one alternative route to get to the office. How do you represent that path with tree hierarchy? Tags (at least nested tags, as shown in OP) allow you to create a generic graph.
For most desktop users this is solved with links and shortcuts; I might navigate directly to a document's location in the hierarchy, but it's more likely that I'll rely on an automatically generated link in the 'recent files' area, or a shortcut I created somewhere to the folder containing it.
Even if there are multiple routes, I still remember by location. What I remember is "the forks are in the top left kitchen drawer" -- not "to get to the forks from the fridge, turn around and walk two steps to the right; and to get to the forks from the table, turn left, then walk around the cabinet".
> How do you represent that path with tree hierarchy?
You represent that by using a spatial Finder [1] (with aliases), instead of a browser. Files and folders can be organized spatially on the desktop and within each window. Since a given window is a permanent view into its folder (there was no way to open multiple windows of the same folder nor could a window ever show a different folder) it was extremely easy to remember what you were looking at, visually.
The classic Mac OS also had tags which you could use to flag files and folders by colour and of course you could change folders into small icon or list view in order to handle many files at a time. These view settings permanently applied to a given folder, so every time you opened it you would see the same window, in the same place, right where you left off.
To someone who may never have used this system, it's hard to describe how incredibly productive it was. Humans have extremely powerful spatial memory and the classic Mac OS Finder leveraged that perfectly. Just like the example of someone walking through their house to retrieve a book from their library or a letter from their desk, you could navigate your folder structure visually, without even reading the file names; it was incredibly powerful!
Same way you draw a UI for different screen sizes: you scale it. If you were doing this from scratch today, you'd use a resolution-independent UI drawing library, such as one based on vector graphics.
As for wildly different aspect ratios (portrait vs landscape) you're just going to have to compromise. As long as you have a mechanism to make sure windows can't get lost outside the bounds of the screen, you should be fine.
The problem should only happen once, though. After your files have been synced between the devices for the first time, thereafter each device should maintain its own window positioning.
If you scale down a large display to a phone-sized one, either usability is horrible on the phone because touch targets, text etc. are too small or you cannot use the space of the large display effectively.
> After your files have been synced between the devices for the first time, thereafter each device should maintain its own window positioning.
No they shouldn't. That breaks the entire metaphor because now items don't have a single spatial position anymore.
Boo, the finder is the worst thing to happen to UI because it is like pulling all the leaves off a tree and then ranking them in interesting ways. I mean, it's useful and great but at the expense of stalling development on tree/graph representations for the last ~30 years.
I mean look at this article - great writing, lots of good ideas, but not a single picture until page 9 of 10 and then you just get two lists and a card catalog metaphor. Whoo, dynamic lists. The problem with a spatial finder is that it's all about ordering things, which is a hideously constrictive thing even as a dynamic process because it reduces everything to one dimension and encourages quantitative rather than semantic thinking (let me be clear that I'm blaming Apple for this unhappy state of affairs rather than the author).
We can see another instance of this in the mentioned bookmarks; it's 2019 and the only options I have for exploring my bookmarks in any of the popular browsers are as a 90s style menu tree. Why doesn't my browser pro-actively tag my bookmarks and allow me to apply any of many categorical schemas that people share? Likewise, why can't I configure my browser to be more selective about CSS? It blows my mind that websites and apps offer 'dark themes' as a feature when theming has been a basic part of desktop UI for >20 years. I should be able to view any website through a dark theme whose darkness and basic palette I've chosen for my browser, in the same way that I can put on sunglasses rather than having to wait for artificial things in my physical environment to be painted in less vibrant colors.
I'm well aware that there are any number of CSS modifiers and plugins, I have used many over the years. But they all require either restyling individual website elements via a GUI or writing replacement CSS code. This is like saying that if you have an uncomfortable chair the solution is to take up woodworking rather than throw a cushion on it. I am never going to write code to increase my comfort (as opposed to developing new tools) because the discomfort of having to acquire competence with yet another syntax vastly outweighs the likely aesthetic improvement. It would make perfect sense if I was a working designer but I'm not and don't want to be. Invitations to code something oneself are implicit failures to understand the problem, confusing the cup with its content.
I submit that part of the problem is that designers and developers are hopelessly constricted by their own axiomatic metaphors. Take bookmarks; I still use physical bookmarks occasionally, and they're essentially nodes for a particular marker in a one-dimensional linked list (the sequence of paper pages in the book). The browser allowed us to take a collection of such one-dimensional markers and structure them in tree form. That's nice, but by now we should be up to using quaternions to explore 4-dimensional slices in n-dimensional semantic space.
Frankly, if I were dictator of the internet my first rule would be that you can provide content or rendering but not both at the same time unless you want to be in an art museum. UI paradigms are like vessels into which content is poured; the web is the equivalent of having to buy drinks only in single-serving cans or bottles and then telling people to be happy with the huge variety of packaging options, with no regard to the time and cognitive overhead of trying to find what you want behind 500 layers of branding. Economists refer to this as the paradox of choice: little, no, or even negative value is being delivered to the consumer, while vast resources are being spent on trying to slightly expand market share through product differentiation. In some cases, far more economic resources than required for the manufacture and delivery of the underlying product and with significantly higher negative externalities in terms of energy and anxiety.
> The problem with a spatial finder is that it's all about ordering things, which is a hideously constrictive thing even as a dynamic process because it reduces everything to one dimension and encourages quantitative rather than semantic thinking (let me be clear that I'm blaming Apple for this unhappy state of affairs rather than the author).
Can you elaborate on this a little more? I'm not quite sure what you mean here.
To me, the spatial metaphor Finder was all about semantic thinking. I'd open up my project folder and all of the documents would be spread out visually, within the 2D plane of the window, and their spatial orientations and groupings carried meaning for me. Additionally, I'd tag things with various colours, such as marking a file red (for TODO) or another one green (for Done). This was just about as far from one-dimensional as I could envision a workable UI (3D UIs have been tried in games and they're pretty clunky).
As for bookmarks? I agree with you. Bookmarks suck!
Well, you make a fine point about the colors, which were something I'd forgotten to consider. I did (and do) use this approach, but with the passage of time my enthusiasm for the maintenance required has really diminished.
What I am looking for is something in the same ballpark as dynamic graph generation as seen here, but with shareable tagging: https://www.mcnutt.in/forum-explorer/
When you say "their spatial orientations and groupings carried meaning for me" I wholly agree; what I'm trying to say (poorly uwu) is that managing the finder or other WIMP desktop elements is like laying out a beautiful graph in Visio or some other flowcharting software, vs the sort of dynamic graphs that the computer can render for you on the fly.
Taking the example above and imagining it extended to disks and documents as opposed to topics and comments, suppose you could have nesting nodes, whose size might be a treemap-like function of their content and whose layout might be managed with Apollonian gaskets; and imagine further that I could click on the name of individual contributors (isomorphic to file properties or curated tags) and quickly see a cross-section of their contexts - not the text of the individual comments, but the immediate neighborhood of the graphs in which they appear; and imagine further that as I applied selection criteria of various kinds to refine my graphs, their unique combination would itself form a metagraph which I could save at any time, like the key to a particular collection.
That last sounds a bit handwavey, but I'm imagining a fairly small graph that would map about the same amount of information as a regex string or polynomial expression. You wouldn't read the content of these graphs (unless you really wanted to) any more than you measure the ridges and notches of your most commonly used physical keys. They'll just become shapes you recognize and name to unlock your favorite perspectives, fulfilling much the same function as the extension buttons on your browser toolbar.
> Taking the example above and imagining it extended to disks and documents as opposed to topics and comments
I like to think of it in terms of the real world objects the metaphor is meant to mimic. So I think of documents the way I would pieces of paper on my desk and control panels like light switches in my house (to borrow John Siracusa's example).
On the other hand, something like an address book or a filing cabinet full of tax records is not what I'm interested in organizing spatially. Instead, I'd use a purpose-built tool such as an address book application or document-oriented database. Likewise for photos or music, which lend themselves to custom database applications of their own.
I'm not sure what your treemap-like graph database would be optimized for, other than disk space cleanup tools (which I have used).
Tags are like a book index while folders are like a table of contents. Both are very useful, but I'd say an index is more useful as it can scale very well across multiple books. (Such an index would have its own table of contents, I think :)
You can have hierarchical tags even with a flat system, just name them "foo", "foo/bar" and so on. (And if you design a tag system, nothing prevents you from adding a hierarchy of tags.) But you're right that tags need discipline and a single source. There's no way to have a community-based tag soup without some sort of tag police.
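As a sketch of how little machinery the "foo/bar" convention needs: hierarchy falls out of a simple prefix check over a flat tag store. All file names and tags below are invented for the example:

```python
# Flat store: each file maps to a set of plain string tags.
tagged = {
    "irs-2019.pdf": {"finance/taxes", "done"},
    "bavaria-001.jpg": {"photos/vacation/2019", "pets/fido"},
    "fido-vet.pdf": {"pets/fido", "finance"},
}

def has_tag(tags, query):
    """True if any tag equals the query or sits below it in the hierarchy."""
    return any(t == query or t.startswith(query + "/") for t in tags)

def search(query):
    return sorted(f for f, tags in tagged.items() if has_tag(tags, query))

print(search("pets"))           # ['bavaria-001.jpg', 'fido-vet.pdf']
print(search("finance"))        # ['fido-vet.pdf', 'irs-2019.pdf']
print(search("finance/taxes"))  # ['irs-2019.pdf']
```

Searching for a parent tag pulls in everything under it, which is the folder-like behaviour, while a single file can still sit under several unrelated parents at once.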
Another problem is that all tag-based system I saw (not many) looked very much in their infancy. All they had was very basic flat tagging without hierarchy and very basic search. These are not enough to realize their potential.
Tags are a strict superset of folder functionality. There's nothing you can do with folders that you can't do with tags. You want nesting? Tags can nest. You want visual clues? Tags can have visual clues. Folders can be and sometimes are implemented as tags.
Well, tags and folders are identical. You can have the same file linked to from multiple folders at once. `ls` will even show you the number of folders the file is part of in the second column.
I agree that there's lots of value in the simplicity of a tree hierarchy as a default, but sometimes the extra power of a tagging/labeling system would be valuable. Essentially it gives you a generic graph structure. Others have mentioned that this is valuable for things like media, ie photos that often need to be members of multiple album collections. It could also have other benefits as well, such as being able to visualize your filesystem as a graph, and easily see clusters of data depending on which tags are used the most.
The only heuristic we really have for tree filesystems to get a feel for a subtree (child folder) is the total disk usage of that folder, or maybe total number of files in the subtree.
I agree that tags shouldn't really be some additional meta property on the existing files/filesystem.
I had an idea to use folders as tags, and to make a script that moves folders around according to the usage of the tags. So that way we have hierarchical folder structure, and the folder name is the tag, but it will stay organized according to the usage.
The trick in my case was to come up with a convention for a folder name in order to make it obvious it's a "tag" folder.
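The counting half of such a script could be as small as this; the "#" prefix convention and the paths are hypothetical:

```python
from collections import Counter

TAG_PREFIX = "#"  # convention: a folder starting with '#' is a "tag" folder

def tag_usage(paths):
    """Count how many files sit under each '#tag' folder, given file paths."""
    usage = Counter()
    for p in paths:
        for part in p.split("/")[:-1]:  # folder components only
            if part.startswith(TAG_PREFIX):
                usage[part] += 1
    return usage

files = [
    "docs/#work/reports/q3.pdf",
    "docs/#work/notes.txt",
    "photos/#fido/park.jpg",
]
print(tag_usage(files).most_common())  # [('#work', 2), ('#fido', 1)]
```

A maintenance script could then move the busiest tag folders closer to the root, which is the "stay organized according to the usage" part of the idea.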
I am also not a huge fan of tags. Gmail uses tags. It allows hierarchies of tags. In theory that is more powerful. It's like a folder system where something can be in multiple folders. In practice it just makes things messy.
Autogeneration of topology tags should solve this. I also need tagging agents that run around when I'm not busy and tag things on my behalf, particularly recognizable patterns in existing tagged structures.
It doesn't work because users are unwilling to go through and tag all their files.
We live in a world where people still email Word files back and forth, incrementing numbers at the end in a desperate attempt to keep track of versions. (Or just appending "final final...")
Now imagine asking users to tag each copy of that Word document that lands in their inbox and gets downloaded. (Assuming file system tags will be tossed as soon as the file goes through email.)
Google Photos does tagging properly. They only had to spend 5 years building the most sophisticated consumer visual machine learning algorithm ever made. Then they made it work on animals. (Why haven't I seen any discussion around the fact that Google has trained a model to identify animals by their face? Super cool!)
But even ML is only ever going to be able to ID a subset of content.
(Google's auto tagging of things is mind blowing though, did you know you can ask Google for a list of all restaurants you've visited in a given city? Creepy but useful!)
> It doesn't work because users are unwilling to go through and tag all their files.
I tend to agree -- in fact, I was just describing this very idea (that users will go in and mark up all their content if we just give them the tools to do so) the other day here on HN as "the metadata delusion": https://news.ycombinator.com/item?id=19515848
But it's important to note that tagging as the main usage scenario is envisioned by the linked article, not by those earlier RDBMS-as-filesystem projects. Those older projects assumed that much if not all of the metadata associated with data would be generated and managed by applications sitting on top of the database, using it as their data store. Users could add additional metadata if they wanted to, but they didn't have to for the system to be able to offer benefits.
The RDBMS-as-filesystem argument went more like this. If you want to be able to filter all your documents and only show the completed ones, then yes, allowing you to put a "done" tag on those documents is a crude way of doing that. But if all applications have access to a common data store, more sophisticated options become available, like letting developers build a workflow application that can read your documents and track changes, log approvals/change requests, etc. Then you wouldn't have to tag done documents "done"; you'd just use software to do your work, and your applications would teach the system what it means for a document to be "done".
(Of course, those older projects all fizzled out in the marketplace so we never got the chance to find out how these theories would have played out in practice.)
"It doesn't work because users are unwilling to go through and tag all their files."
This doesn't make sense. They "File -> Save" to a location, right? That's a tag. All metadata is tags. People give filetype suffixes to their filenames, right? That's a tag. They might "Save As -> Format", right? That's a tag. Maybe they "File -> Print" on January 17 at 3pm? That's a tag.
Oh and ALSO they are a 'superuser' who wants the photo to go into "Great Pics Of My Dog" tag? Cool, they apply that, and that's also a tag.
People can handle change, people can handle simple concepts, people use tags today, people can handle tags generally.
Getting users to tag data has already been attempted. Heck Office has had this built in for years (decades?).
We live in a world where after nearly 30 years of the folders paradigm, the vast majority of every day users still store everything on their desktop.
(Though IT policies are working to change this, forcing files to be stored on network drives and such).
Some more advanced users may use a folder called "work".
Years ago I went through and tagged all my photos. Of course the photo management system I used up and died. After that I gave up. I do have some photos organized and stored locally, but nowadays Google Photos handles most of that for me, including the auto-tagging.
When people sit down at a computer, they generally have a task in mind. Be it write a document, calculate with a spreadsheet, or send a picture of their dog to a friend.
Most people don't think ahead to "5 years from now I may want to find this document, so I should think up every possible word I may search for then and apply it as a tag."
Tagging data takes time, and a good deal of thought to have a coherent system. Then, technically, those tags need to persist and transfer across disparate OSs.
I personally love the idea of tags. My capstone project in college was exploring different UIs for tagging files, and trying to make the process as fast and friendly as possible.
I actually think it has less to do with performance problems than the significantly more complicated mental model one needs to deal with relational databases. Hierarchical files and folders are very simple.
The theory was that you would never really interact directly with the database; you'd interact with applications, which would just happen to use the database as their data store. So developers could use it to build rich interfaces that were appropriate for the tasks their app was supposed to tackle, and those interfaces would shield the user from the complexity of dealing with the database directly.
This sounds similar to the mobile approach, except that the underlying fs is hierarchical. Do you have any idea how this worked with paths in the terminal?
I'm not sure any of them particularly cared about the terminal. Remember, this was the '90s; all the companies that explored this (Microsoft, Apple, Be) were deeply invested in the GUI as the Future of Computing™.
It's a bit of a chicken-and-egg problem - or as we like to say, one equilibrium of many.
Every attempt I did to use any great tagging system, in pretty much anything ever, had resulted in the system/app/infrastructure (etc.) dying sooner or later, which is why I don't use any of that stuff, creating the same externality for others.
Folders, on the other hand, are stable. You can't change folders, at least not simply.
Jumping between those equilibria needs a better strategy than "offer tags".
Because that's just what every user wants -- for all the performance gains of their new, fast hardware to be soaked up by OS administrivia they didn't even realize they needed. I mean, FS performance under Windows is already such garbage that an SSD is basically a requirement for a usable desktop; a little more chug is basically just gravy on top by this point.
Don't they need it, though? People are already using layers over the filesystem, like "media libraries" and "document managers", which undoubtedly trade performance for "administrivia". Maybe it'd be better if this was provided by a common layer, much like the FS already provides a layer over disk blocks.
As someone who had to interface with SharePoint once or twice, I cannot get behind the idea of replacing folders. Folders are just a simple tree structure, like any table of contents or index. That is not a concept too hard to understand. My 75-year-old grandpa mastered this in about 10 minutes.
I have much more trouble with people who ask how they can get their images off their smartphones, because they don't know where their custom app likes to save them. And I have to search too. This is 100 times worse if you do not use the one way the manufacturer intended, which might not be applicable to your use case.
The article mentions some really good ideas for window managers. I currently use a widescreen curved monitor that is pretty awesome, but the normal Windows window manager is actually quite decent. You just use a few different hotkeys. Nothing that inhibits "productivity".
Yet files are bigger now and more numerous than ever. We definitely want rich indexes of files sometimes, but files can be so much more than rich data for users — they are also cheap book-keeping for programs. Imagine trying to implement a performant database on top of this database: no amount of technology can make O(log N) a substitute for O(1).
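As a toy illustration of that complexity point, compare a direct hash lookup with a search through a sorted index (the paths are made up; real filesystems are of course far more involved):

```python
from bisect import bisect_left

# Toy illustration of the complexity argument: a direct hash lookup
# (average O(1)) vs. searching a sorted index (O(log N) per lookup).
# Paths are made up; real filesystems are far more involved.
paths = [f"/data/file{i:06d}" for i in range(100_000)]

direct = {p: i for i, p in enumerate(paths)}  # hash table: O(1) average
index = sorted(paths)                         # sorted index: O(log N)

def find_indexed(path):
    i = bisect_left(index, path)
    return i if i < len(index) and index[i] == path else None

assert direct["/data/file012345"] == 12345
assert find_indexed("/data/file012345") is not None
assert find_indexed("/data/missing") is None
```

A program doing millions of tiny lookups feels the difference; a human browsing their photos never would, which is arguably why the debate keeps going in circles.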
This is interesting, though a lot of the claims are highly debatable, but one really stood out to me:
> the tasks you do are more complex. That’s where voice input shines.
I don't know how far into the future this is supposed to be implemented, but I still have yet to find a form of voice input that's remotely accurate enough for anything "more complex". Current voice input seems to rely heavily on the scope for commands being limited, and even then it breaks down often with names and such.
Solving that is simply a matter of developing a way for the computer to pick up inaudible or mouthed commands. Not a trivial task, but it does seem doable with current technology.
Okay, so basically control the user input with muscle movements. Hmm, I wonder if some other group of muscles isn't more suitable for that; how about trying, say, our fingers for starters?
edit: I've read that NASA developed a microphone that doesn't need sound, though, so it is doable.
Language is much more expressive than your fingers for certain tasks (and vice-versa). This design advocates for the use of _both_, not one or the other.
I think it's very unlikely that any significant number of people can type faster than they can speak.
Depending on the complexity of the task, you might very well have to press "hundreds of buttons" in order to achieve the same result as a single sub vocalized voice command. Not to mention that speech can be far more intuitive than keyboard shortcuts or nested sub-menus for certain tasks.
Again though, this isn't about replacing keyboard input, it's about supplementing it.
Yeah, but I remember spending 20 minutes driving on the highway and trying to get Google Assistant to play the next episode of my podcast (Not the most recent! The NEXT ONE. Gaah)
It would be 4 taps to get done, and it is impossible to do via voice.
Combining an engine to interpret voice commands with textual keyboard input seems like the best approach here. You get all the useful fuzziness of voice with none of the transcription errors (or revolting workplace noise pollution).
I'm fairly well sold on some sort of "omnibar" interface concept where you just tell the computer what you want with a keyboard. Alfred, Spotlight, Google search, Wolfram Alpha, iCal's smart add (you can just type "2pm next Friday"), the Action palette in IntelliJ, VS Code and Sublime's command palette, and so on. That, just... everywhere. And if you're alone, sure, use voice instead. Just don't deprecate the keyboard: still the most reliable way to get accurate text into a computer.
Exactly, voice works amazingly as a "3rd interface" complementing keyboards and mice which aren't completely accurate, either, fwiw. Voice gives us more speed, frees up UI clutter and can free up our hands.
Oops I was a bit unclear. I meant, include the engine part that matches English statements to commands and parameters but hook it up to a keyboard-driven text box instead of a microphone. So it's:
written text->'AI'->command
vs.
speech->'AI'->command
(Edit: and sure, voice as an adjunct per my first comment.)
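That written text->'AI'->command stage can be sketched with nothing fancier than fuzzy string matching. A toy example (the command table and its behaviour are entirely hypothetical) which would accept typed or transcribed input alike:

```python
from difflib import get_close_matches

# Hypothetical command table; the "AI" stage here is just fuzzy
# matching, so typed text and transcribed speech share one code path.
COMMANDS = {
    "open file": lambda arg: f"opening {arg}",
    "new window": lambda arg: "new window created",
    "play next episode": lambda arg: "playing next episode",
}

def interpret(utterance):
    # Try longest command-shaped prefix first; the rest is the argument.
    words = utterance.split()
    for split in range(len(words), 0, -1):
        head, arg = " ".join(words[:split]), " ".join(words[split:])
        match = get_close_matches(head, COMMANDS, n=1, cutoff=0.8)
        if match:
            return COMMANDS[match[0]](arg)
    return "no matching command"

print(interpret("play next episode"))   # -> playing next episode
print(interpret("opne file notes.txt")) # typo still matches "open file"
```

The fuzzy cutoff is what buys the "useful fuzziness" without transcription errors: a typo still resolves, but gibberish falls through to "no matching command" instead of triggering something random.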
Yes, please don't make voice input mandatory. In one place I worked, the guy in the next cube used Dragon Speech for navigation and typing (he's not disabled; he just thought it was hip). It was beyond annoying to hear all the command input.
> Current voice input seems to rely heavily on the scope for commands being limited, and even then it breaks down often with names and such.
Voice commands are more like a CLI (short, limited commands) than a GUI, and the CLI has been proven to be much more composable and scriptable than GUIs. AFAIK it hasn't been done, but adding stdio and pipes to a voice interface could make it shine for complex workflows where a GUI fails.
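One way to picture it: treat each recognized phrase as a small filter and let a connective like "then" play the role of the shell pipe. A toy sketch (the phrases and commands are entirely made up):

```python
from functools import reduce

# Hypothetical phrase->command table; each command is a function from
# an input value to an output value, like a tiny Unix filter.
COMMANDS = {
    "list photos": lambda _: ["dog.jpg", "cat.png", "beach.jpg"],
    "only jpegs":  lambda xs: [x for x in xs if x.endswith(".jpg")],
    "count":       lambda xs: len(xs),
}

def run(utterance):
    # The spoken word "then" plays the role of the shell's pipe operator.
    stages = [s.strip() for s in utterance.split("then")]
    return reduce(lambda data, stage: COMMANDS[stage](data), stages, None)

print(run("list photos then only jpegs then count"))  # -> 2
```

The composability comes entirely from the uniform value-in/value-out contract, exactly as with stdout/stdin; the speech recognition layer is orthogonal.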
Eye tracking could be useful for providing the necessary context for limiting the scope of commands.
I also believe that minimizing latency and eliminating the need for hot words will make a big difference in the usefulness of voice commands for more common tasks: https://twitter.com/Google/status/1125815241026166784
Yeah, voice-based interfaces have huge potential, but maybe it's easier if we adapt ourselves to match what they're expecting. Like a shorthand for voice. We already learn seemingly random words/numbers/special characters in programming syntax. We could do that for voice too. Or maybe a 'grunt'-based interface? A grunt is pretty universal, right? :P
Like, what do they imagine voice really doing for CAD programs or Excel, ERP, video editing... at least I guess in Photoshop you could switch tools with voice without moving the brush, but that's not that compelling as an example of voice-for-complex-tasks.
It depends on the task and the UI. In some tools like CAD I need to switch tools a lot: add an element to a sketch, constrain it, repeat. Switch back into 3D, pick a new construction plane, etc... most tools have pretty distinct names and could be called out faster than doing the mouse acrobatics.
We aren't used to talking to things instead of humans. But I think that voice input would be the next logical step beyond the search bars for program features that have been showing up in the last few years. These search bars are halfway to freeform textual commands. Entering such commands by voice is then mostly bashing together existing technology to make something new.
This is a fun design exercise, but a couple things really do stand out to me as useful.
First, panels instead of windows -- a thousand times yes, please. Overlapping windows doesn't provide any benefit. Making everything either full-screen or splitting the screen is so much better. There's a reason so much software has moved to tabs and side panels instead of separate windows within the app.
Second:
> The desktop metaphor as the basis of computer interfaces is inefficient and outdated. Today, most of our data exists outside of files and folders. The desktop worked great to get us started 40 years ago, but it was never built for the complexity and amount of work today.
I don't like the author's particular solution... but yes the idea of an actual "desktop" folder seems silly and antiquated now.
And it's about time that cloud content (whether file-based like Google Drive or content-based like Google Photos) got treated as a first-class citizen in the OS.
If the iOS Files app lets me treat my Google Drive as just another folder... why doesn't Finder? Why do I have to install third-party software for that?
> Overlapping windows doesn't provide any benefit.
Just yesterday, I overlapped a tiny terminal window (resized to be very small and set to "always on top") showing the current disk space ("watch df") over the corner of a larger window which was doing a file transfer. While I agree that non-overlapping panels are better most of the time, sometimes overlapping can be useful.
> treat my Google Drive as just another folder
That illusion would fail quickly as soon as your network connection dropped. Non-local folders have fundamental differences to local folders and should be treated differently. (Local folders can be assumed to always be present unless manually removed, be fast, and unmetered; non-local folders can disappear and/or change at any time, can be slow and/or vary in speed, and might have a per-megabyte cost.)
> Overlapping windows doesn't provide any benefit.
Another example: Overlapping my work windows over the Slack window. The red "you have new mention" indicator is a poor indicator for determining the importance of the mention; being able to see the channel it occurs in at a glance is very nice for prioritization. There's no real need to see the whole slack window.
I think this is one application of the “minimized pane” example the article showed: applications still have a way to show more data than just an indicator.
Alternatively, you could let panes be resized smaller than their “native width”, and get the same effect as overlapping.
Oh yes, the ability to pin a window to the top is one of the features I miss most on Windows. You don't need it every day but when you do it saves a ton of time and clicks.
I have recently been working where I have around 5 windows open.
Editor, Terminal, Documentation, Email, Browser with test profile.
I've found that alt-tab hasn't been working the best for me, but what has worked is having mostly overlapping windows, with a bit sticking out that I can click on to pull it to the foreground.
Maybe I should be assigning some keyboard shortcuts to pull up particular windows, but this works reasonably well, and is easy enough to adjust if I added a 6th window, or whatever.
The idea of panels can be realized today on Linux using a tiling window manager, which is superior in my opinion because it allows you to slice your screen as you see fit, has workspaces, and lets you have multiple windows in a single slice and switch between them in a variety of ways (tabs, etc.).
Other users have mentioned situations where this is true (and I have run into them myself too). But tiling window managers + workspaces are just so much more efficient for almost all use cases. And they don't have to have a steep learning curve.
I use KDE with the grid tiling plugin. When I open a window, it takes up the whole screen. Opening another window splits the screen in half vertically. They can be dragged as normal to swap positions. Opening a third window splits one window into two, horizontally. This goes up to a (configurable) 2x2 grid.
There are only four basic keyboard shortcuts to remember: meta+F floats a window to make it not affect tiling; meta+T makes it tiled again. Floating is useful for small utilities like calculators or temporary status indicators; they always float above the tiled windows. meta+Q/E shrink/grow the current window, and the rest adjust size to fit (of course window borders can still be dragged and it works the same).
Since every window is always visible, there is no need for a window list in the taskbar - mine only has the date/time, status icons for notifications, and the workspace pager (map).
Virtual desktops are the last part. You obviously will want more than three windows. I use a 4x4 grid of 16 desktops; 4 total desktops is more practical for normal people. They are mapped to a 4x4 square of keys on my keyboard. A few windows for applications like email or IM have a permanent space on the bottom row of 4 desktops. Normally I stay in the top left desktop and move to adjacent ones once space starts running out on the primary one.
The pager in the taskbar shows an overview of the grid. meta+WASD or arrow keys switches between them; holding ctrl moves the current window along to the other desktop. Oh, and meta+tab is like alt-tab but for workspaces.
This maximizes screen space that can be used. In a floating WM like Windows/macOS, only maximized windows take up the full screen; every other window is smaller. Windows has a 2-pane L/R split builtin, which is a weak replication of my system. If you have three floating windows open, using all the screen real estate is a frustrating task of dragging windows by their titlebars and then using the thin borders to resize them. With my system they just magically appear into place; the only user input is to swap their positions or resize them.
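The splitting policy described above (full screen, then a vertical split, then splitting one half, capped at a 2x2 grid) fits in a few lines. A toy sketch, not the actual plugin's logic; screen size and (x, y, width, height) tuples are made up:

```python
# Toy sketch of the grid-tiling behaviour described above: one window
# fills the screen, a second splits it vertically, a third splits one
# half horizontally, and it caps out at a 2x2 grid of four windows.
def tile(n, W=1920, H=1080):
    if n <= 1:
        return [(0, 0, W, H)]
    if n == 2:
        return [(0, 0, W // 2, H), (W // 2, 0, W // 2, H)]
    if n == 3:
        return [(0, 0, W // 2, H),
                (W // 2, 0, W // 2, H // 2),
                (W // 2, H // 2, W // 2, H // 2)]
    # four or more: 2x2 grid (extra windows would stack/tab in practice)
    return [(x, y, W // 2, H // 2)
            for y in (0, H // 2) for x in (0, W // 2)]

print(tile(3))  # left half full-height, right half split in two
```

The point of the sketch is that the user supplies zero geometry: every layout is a pure function of the window count, which is why windows "magically appear into place".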
With floating WMs if you have more than 4 windows open you have to either:
A) leave bits of windows visible behind the other, which takes up time to arrange the windows, or
B) frequently focus on the taskbar and select windows, which is very inefficient
Workspaces solve this well, especially when put in a grid instead of a row. Related things are just a single press of meta+WASD away, clustered in their own area in the grid.
It's clear that the floating windows paradigm is outdated and we need something better. But I'm not sure that tiling WMs like you see on Linux today are the answer (even ignoring their terrible usability for anyone but power users).
Personally, the situations where I actually want to have windows next to each other for any significant amount of time are basically a one-in-a-million occurrence. Usually I have multiple "complex" windows open that I switch between (browser, IDE and so on). It would be nice to have them visible next to each other all the time, but there is simply not enough screen space to do so, tiling or not. The other use case that happens very often is that I have a "complex" window open and need to quickly drag & drop a file from the file explorer into that window. Again, a tiling WM is not suitable for this because it would make both windows too small to use.
Many tiling WMs also lack the concept of minimizing. It's often the case that I have multiple windows (for example file explorers with different directories) where I only need one window at a time and the others are minimized for quite a while. But I do not want to close those windows because I will still need them later.
I actually decided to try tiling because using heavy applications like IDEs was annoying in floating WMs - it was a constant stream of interrupting work to go to the window list in the taskbar or alt-tabbing 'X' number of times to go back to that other window.
Virtual desktops solve these problems effectively if the shortcuts are set up properly (not ctrl-alt-1/2/3/4 and ctrl-alt-arrowkeys, which iirc has been the default for a while). When I am programming I have one workspace with just one terminal running vim split into a 3x3 grid of files. Next to it is a workspace with a terminal and Firefox running side-by-side - terminal to compile/run and FF for documentation. Below and diagonal to the vim workspace I accumulate whatever else I'm needing at the time - file explorers, terminals, browsers, GIMP, etc. Switching between these workspaces is easier than alt-tabbing (meta+WASD - only one finger needs to move - plus meta + the 4x4 grid on the keyboard, mapped one key to each workspace, for long-distance moves).
Minimizing is accomplished by moving it to another workspace: meta+alt+WASD.
The only real issue is drag-n-drop, which requires the sequence of: float the window => move it to the other workspace => drag-n-drop => move it back to the initial workspace => unfloat.
> First, panels instead of windows -- a thousand times yes, please
I don't know what OS you use, but putting windows side by side quickly can be easily done in all OSes I use, including macOS, Linux, Windows, and Android.
> But macOS certainly doesn't come with that. I've got to manually drag the windows and edges and everything
Keep the green maximize button pressed in the app you want on the left in split screen. On the right you'll see a list of apps. Click on the one you want on the right.
I agree with pretty much everything you've said except the part about overlapping windows. I find Sticky Previews immensely helpful in Windows and sorely miss it when using other operating systems.
You can achieve some of it on Windows (10 and maybe 8) by snapping the windows to the sides of the desktop (with the mouse or Win key + arrows). And when you have two windows, both will be resized at the same time.
> Overlapping windows doesn't provide any benefit. Making everything either full-screen or splitting the screen is so much better.
I disagree 100% and I hate people like you. Please, do not force this on users. I have about 10 programs running at once, all accessible because of the way they are partially stacked over each other.
Based on the previous post of this in 2016, this is a university student's portfolio piece, made while hoping to find an internship. It's not a product, just a concept.
I'd say amazing job in terms of design and concept. That being said, if this were an actual product, it wouldn't count me as a user. It looks cool, nice, etc. It may appeal to a specific type of user who's fine with relinquishing control over how things are done on their computer, but as a dev, if I choose to change things about my workflow, I want additive changes to my existing flow, not a full end-to-end replacement. Lots of things already exist which bring my current flow close to this. This doesn't make multitasking easier for me, it just drops a fat anchor on my normal flow. Three-finger swipe up, cmd-tab, cmd-` to switch between apps and tabs... Divvy to set up zones in a grid with global shortcuts so you can place windows wherever you want in a keystroke. All those things are natural and happen without me thinking about them. I can manage 30 open apps and 100 tabs in multiple browser windows if I wish, and it's not slow or confusing.
I very much liked the fact that it knows where you're looking and makes many actions instantaneous, like looking at a link and clicking or pressing a key or whatever it was. That's cool, and if it works well and can be integrated everywhere it makes sense, I think it's a great shortcut; it saves you having to mouse around. But that's additive to an existing flow, not a replacement for the entire thing.
The built-in voice assistant does cool things too, and I can see many use cases for it. Not that it can't already be done, though. Things like: make a directory called X, open this app, build project, run automation, or run smoke tests on X project, etc...
Still great work though. As a concept it appears usable and neat. I just think it's too far a departure from regular workflows to work for most people.
This is where Linux could shine: high-productivity desktops. Windows and Apple computers are mass-production items; even semi-illiterate people must be able to use them. We need a desktop for highly skilled technical workers, one that allows us to quickly switch context, to have a lot of info on the screen, one that works well with very big screens. A jet pilot has a specialized interface, so why couldn't programmers have one?
Hey, I always wanted a desktop that tracks my gaze!!!
It's already there on Linux with tiled window managers. You can quickly switch between different contexts with work spaces, almost no pixels are wasted and it scales beautifully with high resolution screens.
Our gnome-shell plugin https://github.com/paperwm/PaperWM actually implements the tiling aspect of this mockup. Although we allow partially visible windows and mixing tiled and floating windows.
This project seems interesting. I usually use a 4x4 grid on i3. I wonder if your plugin can handle this kind of grid. From the pictures, it seems like new windows are arranged vertically only, as columns.
Yes, new windows are added as columns, but it's possible to manually tile two (or more) columns into rows afterwards.
Adding some sort of toggle that opens new windows as a new row is certainly possible. Interesting idea to open in new row if the current column only has X windows and new column otherwise :)
This rule could also be window-type specific. E.g. if the current window is a browser, open in a new column; if it's a terminal, a new row.
I do the same on KDE with the grid-tiling plugin [0]. You can also configure the rows/columns inside each workspace (I use 2x2 for max 4 windows per workspace).
meta+WASD/arrow keys to move in the workspace grid, plus a 4x4 square on my keyboard mapped to each individual workspace, hold down alt too to bring the current window with you.
meta+Q/E to resize the active window, and meta+tab is like alt-tab but for workspaces. meta+ctrl+WASD/arrow keys to swap window positions.
meta+F/T to toggle floating/tiling, useful for calculators etc. Floated windows always stay on top.
I wish shortcuts like alt-tab and superkey-<number> would only look at your current workspace.
If I'm on workspace 2, and my superkey-1 corresponds to an application that is only open on workspace 1, I don't want to switch to workspace 1; I want to open a new window of the application in workspace 2.
Similarly, if I'm in workspace 2, I don't want alt tab to show me applications open in workspace 1.
To me, these behaviors make workspaces inadequate for context switching.
This can be done, depending on the window manager.
In my awesome window manager setup, super+tab always toggles the last 2 windows on the current tag (workspace). It's also possible to map super+1 to make an application be part of the current workspace (or open a new instance there). In this model, a window can be part of more than 1 workspace at the same time.
It's also there on macOS (mostly). Amethyst as a tiling window manager + Alfred makes for a great productivity combo. With both of them, most of the pain points in the article are adequately solved already.
That's an excellent point. Having a higher lowest common denominator to cater to will definitely make for an interesting UI. The sweet spot is to match the UI specialization with tangible productivity gains. All the modern UI simplifications slow me down tremendously. Gestures are crazy slow compared to clicks and keyboard shortcuts. Even something universal like pinch-to-zoom: I can do it faster by mousing over the place where I want to zoom, hitting control, and scrolling to zoom in/out. Also, I think having a voice-command-based AI would be amazing - with the caveat that it actually works, of course :).
Perhaps. I have been thinking about it, but from what I've heard wayland still has some shortcomings. I'm sort of deferring this decision to the i3 dev: when/if he decides to port i3 to wayland (or pass the torch to sway) then I as a user can probably be sure that it's ready.
Well, you can do a lot of crazy stuff with Linux Desktops and it certainly is a good platform for trying new ideas. Nevertheless, if you want to be productive, just take KDE and STFU. Sorry, I didn't mean to be rude and if you like Gnome/Unity better, just go for it.
It is just my personal experience that you can do all sorts of customizations, but in the end, the productivity killers are not your overlapping windows but your news and social media pages.
So what I am using is quite simple:
- Standard KDE window management (kwin) for Firefox, Thunderbird, Okular, Libre Office, Gimp, etc.
- Yakuake with tmux + vim (productivity)
- Virtual desktops, when things get crowded (rarely need more than 3)
The only feature I am missing (about once per year) is the tabbed-window we had back in late KDE 3.
I think Linux has had the best desktop for a very long time now. Maybe it's not as accessible to casual users, but objectively speaking, tiling window managers, workspaces, etc. have been really next-level if you're a developer.
I mainly use a workstation on a large desk in an office. If you use your laptop everywhere (on a small table, a desk, your knee), a trackpad is great because a mouse is basically unusable. But on a desk with plenty of space?
Also, from a medical point of view, I think a mouse is better than a trackpad. But I'm not an expert in that field.
On a Windows box I would agree, but on macOS gestures are so productive that you actually lose quite a bit of productivity by using a mouse - even Apple's Magic Mouse.
Not the person you're asking, but on my macbook I frequently use:
two-finger swipe right (go back)
four-finger swipe up (overview)
four-finger swipe down (change between multiple windows of the same program)
four-finger swipe left-right (change virtual desktops)
pinch to zoom in/out
The first four would be easy to bind to a mouse button, but the pinch zooming is really handy.
I got sick of not having a pinch to zoom gesture on my Windows desktop, so I bound Ctrl to a mouse button and hold it while using the scroll wheel. It's pretty handy, I wish I could do it on all mice.
In a lot of Windows applications, and in the mainstream browsers, holding down the CTRL key and spinning the mousewheel gives you a zoom functionality. It's built in by default.
I use gestures all the time on my Magic Trackpad 2. Let's say I'm just surfing the web on my iMac. I might use:
- two-finger swipe left and right to go back and forwards
- two-finger double tap to zoom in and out
- pinch to zoom in and out
- pinch to view all tabs
- three-finger double tap to look up a word in the built-in dictionary, which I do a lot to get the definition of Japanese words in English
- three-finger drag to copy images or downloads to a folder or into another window, especially if I'm editing a document
- four-finger swipe up to switch to another application's window, especially if I'm dragging something
- four-finger swipe down to switch to another browser window
I'm using these gestures frequently to efficiently move things around when I'm editing a document, making a presentation, or even when restoring a file to a previous version or copying something out of a file's previous version to drop into the current version.
Not OP, but I use gestures religiously. Four fingers to the side to switch desktops (useful for 'categorizing' multiple projects), four fingers up to view all windows in a desktop, four fingers down to view all windows of the active app. Insanely useful and productive.
Not if you pair it with eye tracking, like they're suggesting. It might actually be significantly better than a mouse in that scenario; though I'd have to try it myself to know for sure.
I'm very doubtful that eye tracking works nearly as precisely, reliably and quickly as a standard mouse.
Although I do prefer touchpads in situations where high precision isn't required; it just feels more comfortable to me.
That's why the article suggests using eye tracking in conjunction with the track pad. That way you get both precision and speed. (At least in theory; again I'd have to try it to see how well that'd work.)
They have been trying to replace the keyboard and the mouse for more than a decade now, and the only thing they have done is improve the keyboard and the mouse, or make them worse.
I'm not sure what you're trying to say. Just because previous efforts have failed, doesn't mean we should just give up on trying to design something better.
I'd also argue that touch screens already have replaced the mouse for the average user, so it really doesn't make sense to imply that the mouse is unbeatable.
> Trackpad gesture require a ... trackpad, and trackpad are worst than a mouse.
For values of "trackpad" not manufactured by Apple. General consensus seems to be that Apple's Magic Trackpad is God's own input device, which has rendered mice forever obsolete.
> Speaking of the trackpad, anyone can put 6 fingers on it ? look painfull to me.
>> Trackpad gesture require a ... trackpad, and trackpad are worst than a mouse.
> For values of "trackpad" not manufactured by Apple. General consensus seems to be that Apple's Magic Trackpad is God's own input device, which has rendered mice forever obsolete.
Apple fan boy spotted !
I used to carry a mouse with my MacBook for long coding sessions.
Don't get me wrong, Apple trackpads are really good, but the hand position on any trackpad isn't great.
Your sarcasm detector is either on the blink or not tuned to the band I was emitting sarcasm in.
Rat wrestling is bad enough as it is, but with an actual rat involved at least I can attain reasonable precision with coarse muscle fibers. With fondling the rectangle, not so much, especially when clicks are involved and forget about dragging. I need two hands to achieve what even the rat lets me do with one.
But then again I'm old and cranky. Millennial hands seem calibrated to the fondlepatch, preferring it even to the rat, and I keep hearing gushes about Apple's fondlepatches in particular on Hackernews and r/programming.
It's nice that people don't have to learn Haskell just to use a decent window manager :D I get frustrated just watching others work - it seems like they spend half their days looking for, resizing or moving windows around. Hope it catches on.
Edit: Getting some downvotes and I realized I sound like an xmonad snob. I really don't judge, I just think people deserve something more efficient. If you like your window manager and workflow, then that's great! :)
I appreciate a combination of the two approaches, tiles and overlapping windows, so that I don't feel limited by constraints imposed upon me nor need to manage minutiae like this through endless configuration.
> I really don't judge, I just think people deserve something more efficient
I suppose that depends on your idea of efficiency. Mine is that I'm not fighting the window manager, playing Tetris with all the other tiles, to get things displayed how I want. I find that far less efficient than Mission Control and App Exposé, though I'm as aware as you that others have their own workflow.
The strength of macOS' menu bar is that the menus it contains are static — an application shows all options at all times, greying out the ones that aren't currently available, as a means of making all functions discoverable at all times. This means that the ability to discover and access an application's functionality is not constrained by requiring an open window, as is the case on Windows and most Linux applications.
If the left menu replicates macOS' top menu, it does so with one weakness: it is constrained by both the presence of a window and the vertical height of a window. This has two knock-on effects: (a) every window an application can present, particularly in document-based applications, redundantly replicates the menu; and (b) small windows cannot show the menu in a sidebar. This leads to a user interface full of surprises — can a window show all of its menu? Can a window even have a menu? If a given window is unable to show a menu, should another mechanism be used instead?
The alternative is to present the left menu as a global sidebar, like macOS' Notification Center or Windows 10's Action Center. However, this has two major disadvantages: (a) it's a huge waste of space to have a sidebar wide enough to show a reasonable amount of text along the horizontal axis at larger text sizes (especially on systems that support adjustable system font metrics), and (b) the size of the clickable targets is vertically smaller and more finicky to reach than the (relatively) wider horizontal dimensions of text shown along the top (or bottom, for that matter) edge of the screen.
That issue could be remedied by the use of buttons with icons. However, this combines the existing, space-efficient toolbars with the left menu's horizontal screen waste — it takes two paradigms and makes them mutually worse.
Of course, the waste-of-screen-space issue can be mitigated by making the menu disappear after a menu item is interacted with, like Office's File menu, or by allowing it to be pinned, staying visible until dismissed manually. However, this now means the user has to manage a part of the user interface that was previously handled automatically, far from ideal. Also, if the menu is able to be pinned and presented as a global sidebar à la Notification Center/Action Center, should it come and go depending upon which application is visible? If so, how should the left menu interact with existing windows that may be occupying its space? Cover those windows or move them to the right? The former still means the user is required to manually manage something that should be automatic; the latter means that the user loses control over the placement of their windows.
Speaking of window control, I've mostly been thinking about what the left menu would look like for existing window systems where users can place windows freely. Desktop Neo proposes tileable windows similar to macOS' full screen app windows — how should the left menu be presented? If presented per-window, there will be quite a lot of wasted screen real estate; if there's a global menu bar, it would probably reserve a certain amount of the left side of the screen, taking away a significant amount of space that should go to the tiled windows — unless it dynamically resizes all tiled windows horizontally, which could be computationally expensive if one of the windows is graphics-intensive, such as a game.
Finally, a layout with vertically-small menu items displayed in a list as the initial way to access the functions of an application, rather than as submenus, seems ergonomically finicky for use on a keyboard-and-mouse oriented system, and touch-oriented systems are better off with touch-first, navigation-hierarchy-oriented user interfaces backed by scrollable lists (as per iOS and many Android apps), exposing an application's functionality in a different way altogether.
This left menu seems to be no more than a hamburger menu, though one with constraints compared to existing solutions such as in Firefox and Chrome today. As an alternative to both per-window top menus and a global top menu, it falls flat ergonomically.
This is more about UX than UI, though. I skimmed it at first before actually reading it and had the same thought, expecting some big visual wow. But the meat is in the explanation. This looks like a fantastic experience.
I think this is a good example of how bad wording in an opening paragraph can turn readers off. Here's what I took from that paragraph:
* desktop computer interfaces haven't changed much in 30 years -- that is bad
* people use smartphones and tablets more than desktop computers -- because they are better
* desktop computers should work more like phones/tablets
I don't have a problem with conceptual designs, and I would encourage exploration of different types of user interfaces (both on desktop and mobile). However, after reading that first paragraph on the web site (and watching the video), it was my perception that the author seems to believe the above three points are absolute, and they show nothing that convinces me that any of them are true (and in fact, some of what was shown had the opposite effect).
IMHO a better approach would be to be less dismissive of "older" technology, and rather simply state that you are exploring an alternative interface, and explain specifically why you think it is better.
In other words, make your case for why your interface is better, then let the reader decide whether they are convinced (rather than just stating that a particular kind of interface is better as a matter of fact).
>We now use smartphones and tablets most of the time, since they are much easier to use. ... With people switching to mobile devices for mundane tasks...
Who are these people and for what tasks? Afaict, except for email and Facebook, the promised death of the PC is extremely overblown. In a professional setting, even major phone addicts still use PCs or laptops.
> Overlapping windows as an interface metaphor were invented over 40 years ago with the Xerox Star. Since then, the amount and complexity of how we use computers has increased dramatically. Windows are now inefficient and incompatible with modern productivity interfaces. For more, read my blog post "Window Management is Outdated".
Uuhm [1]:
> The first Xerox Star system (released in 1981) tiled application windows, but allowed dialogs and property windows to overlap.[1] Later, Xerox PARC also developed CEDAR[2] (released in 1982), the first windowing system using a tiled window manager.
Tiling window managers were first, and I think I prefer the less bloated offerings already available compared to this.
What does this do for productivity that existing tiling managers do not? Or is this a Mac/Windows-specific thing?
[edit]
Yeah this sounds horrible:
> Click and hold on the touchpad with one finger to open the context menu wherever you are looking. Then swipe to select an action.
How is this better than a right click context menu? It's not; it's a physically slower event, and a UI that is less capable of adapting to its context due to geometric limitations, just cos "CIRCLES". This is integrating phone HID into desktops for the sake of it, this is unification at the cost of productivity not for the benefit of productivity.
"For more, read my blog post 'Window Management is Outdated'".
clicks on link to blog post
Oh hey that blog post doesn't seem to exist. Nice job.
I think I'll stick to the existing windowed interface, thanks. It's designed for a keyboard and a mouse. Which is what I have. Well actually a keyboard and a Wacom tablet, I'm an artist.
Looking at his redesign of application menus and thinking about trying to make that work for the vast array of menu items in Adobe Illustrator (my main art tool) gives me the heebie-jeebies.
Also oh god he wants to banish folders in favor of tags, too. Everyone who has a sweeping reinvention of How The Desktop Works wants to do that and they never really have an answer for how the end result would be different for managing large projects made up of hundreds of files in nested folders, and how saving something with seven tags that reproduce that sort of arrangement isn't gonna be more hassle than just saving it into the appropriate folder.
And then he wants to make it work via gaze tracking and you know I think I'm just done here.
"Windows are now inefficient and incompatible with modern productivity interfaces"
What?!
I have to agree with some of the other comments here - this is change for the sake of it, and the arguments against current-day desktop UIs seem weak at best, consisting mainly of opinion stated as fact.
Arbitrarily sizing all four edges of windows and having them overlap (partially visible, partially hidden) is a waste of time.
Almost always, you either want to look at a window completely or not at all. And you either want to look at 1 window, or maybe 2 or 3 or 4 and so have them either full-screen or tiled.
You generally don't need them overlapping and don't need to show desktop space behind them either.
Modern productivity interfaces are mostly all full-sized windows with tabs and sidebars now, not child windows and floating palettes like in 1995 or 2000. (Modal dialog boxes are still needed of course, but I don't think the author means those.)
No one cares about the pseudo-modern bullshit. Those interfaces you mean are, ironically, closer to Xerox PARC than to a modern floating window manager/desktop.
It's like the rest of the bullshit of the so-called modern Material desktop. FFS, Helvetica and brutalism predate Material by some 70 years in design, and the Motif/CDE/Windows 9x interfaces got brutalism and FUNCTIONALISM right from the beginning.
Material is good for paper-printed books and traffic signs/urban panels, not for a working and interacting device. The same happens with mobile interfaces and Gnome 3: they suck a lot on non-consumer devices. Guess why.
Many people are using high-res monitors now, which invariably come with software to allow quickly snapping windows to a portion of the screen, creating a grid layout where there is no overlapping, and no desktop space either.
This already exists, right now. And I'd say that solutions like the one I describe are better, because they are completely customisable, and I can make windows full screen or minimise them as needed.
I think it's pretty cool. I see no reason why the current desktop paradigm - as used to it as I am - is the pinnacle of efficiency.
It's not so different than a tiling window manager, but introduces some unique input methods and a hybrid touch control system that I'd like to try before I judge.
I also feel that voice can and will play a larger role in input in the coming years. If it works well enough, it can be faster and easier for many. Especially those with disabilities, or prone to RSI as many in the tech sector are.
Tagging to replace folders isn't a new idea, and was introduced by Gmail a decade ago. It works well. I do think that in the days of machine learning, a system for automatically generating tags would make this process even more fluid.
Is it perfect? Probably not. But I find these alternate interface projects wonderful to play with. It takes imaginative and bold ideas to make them real, and I applaud the author for that alone.
This seems to be very much change for the sake of it.
"The traditional desktop computer is struggling to adapt the simple interfaces of mobile devices while also keeping its focus on productivity."
No, no, it isn't. Desktop is far more productive than mobile ever will be or can be.
"The desktop computer hasn’t changed much in the last 30 years. It’s still built on windows, folders and mouse input. But we have changed. We now use smartphones and tablets most of the time, since they are much easier to use."
No, they're not. The reason why the desktop hasn't changed much in 30 years is because it works.
It may not be the prettiest or the most modern or even the easiest for a new computer user to navigate, but it works damned well for getting work done. That's why windows and mice are still around... they just work. This project... doesn't look like it does that.
> Desktop is far more productive than mobile ever will be or can be.
Hear, hear! The desktop computer (and the metaphor of "desktop") works. People have learned and adapted to be very productive with it.
I do not want a desktop computer with "the simple interfaces of mobile devices". I don't want to remember three-finger gestures on a trackpad, and certainly don't need voice-activated commands (it's faster and more accurate to type).
Like you, I'd also argue against the statement that "we now use smartphones and tablets most of the time", or that "they are much easier to use". This completely depends on what the user is doing with the device.
For certain kinds of communication, application or media consumption, sure mobile devices are easier to use. They're not that great for long-form writing (email, articles, books, programs), or as a personal information management system (file system, database).
All that said, the designs in this project are beautiful, and there are some useful-looking interface ideas. I found this statement at the bottom:
"Neo was designed to inspire and provoke discussions about the future of productive computing."
They really lost me at the whole 3 finger gesture thing. That's a very unnatural thing to do when you are only ever using 1 finger at a time for 90% of the interactions with the touchpad. Furthermore they have like 2 handed 3 finger gestures to do things like zoom in and out... I feel like pinch to zoom is a lot more effective and natural.
I completely disagree. I occasionally use 3-finger swipe to switch between desktops on OS X and the "gentle brushing" motion actually feels more natural than trying to "point and click" with one finger.
Now, to be perfectly fair, I also have CapsLock+Shift+H/L keymapped to do the same thing, as well as two-finger swipe on my mouse. The point is, one doesn't necessarily preclude the other.
Yeah I think this mindset is going to be prevalent with Windows users because these kinds of gestures are not native, while Mac users are comfortable with it since it's integrated in the OS.
It’s also a trackpad vs mouse thing. Three finger gestures work a lot better with the kind of high quality trackpads that the majority of macs come with.
And that is the rub. On my macbook pro, which has what I consider to be a significantly better trackpad, I three-swipe without even thinking about it. But that comes from very tight hardware/software integration.
I have only used Linux for the past decade, but your point still stands as I only get to use crummy windows trackpads. Mac trackpads are indeed much better. I probably would be more amenable to gestures with an actual functional trackpad
Three finger gestures are actually pretty nice when you get used to them. I even use the four finger gestures that Windows has somewhat frequently (to switch between virtual desktops)
One-hand three finger (text select, dictionary lookup, file and window drag and drop) and four finger (expose up and down, virtual desktops left and right) gestures are great on MacBook trackpads. At some point they were default in macOS but now I need to reenable them on every install in both trackpad and accessibility settings. Especially the three-finger one (index + middle + ring fingers) is so much better than click and hold to drag.
I have a FingerWorks iGesturePad on my desk right now, a 17-year-old device which is still to this day the most advanced multi-touch consumer product. My experience with this product teaches me that multi-finger input is not at all unnatural as you call it. You do it all day every day with everything in your life EXCEPT your computer; you would get accustomed to it in minutes or hours.
Yes, and the laptop and standalone touchpads as well. Those touchpads are more similar to Fingerworks products than the touchscreen iDevices. I would even argue that Apple's current touchpads with haptic feedback are clearly superior to what Fingerworks offered, if you install third-party software to expand the range of gesture options. I've been using jitouch for close to a decade now, and on a MacBook Pro I use multitouch gestures at least twice as often as I use keyboard shortcuts.
Two finger pinch to zoom is such an awkward gesture for me for some reason, but I recently discovered that some Android apps like Google Maps and Edge have an alternate one finger zoom gesture that I find much more intuitive: double tap and drag up to zoom out, and double tap and drag down to zoom in.
Three finger is the one gesture I actually use on my macbook pro. My thumb sits on the edge of the laptop, my index is on the glass, and my second and third fingers are hovering at the ready. At some point I got into the habit of scrolling with fingers 2 and 3 rather than 1 and 2.
Four finger, or really anything requiring me to lift my palm rather than pivot from the keyboard, is absurd though.
I’d only disagree with your ‘I don’t want to remember gestures on a trackpad’- much like keyboard shortcuts, muscle memory makes for very easy pickup of intuitive gestures; there’s little to “remember”. I’m sure nowadays you don’t have to think hard on how to Ctrl/Cmd+C/P/Z?
I think this mindset of mouse over trackpad, as others have mentioned, is to some extent correlated with hardware quality: IMO trackpads on PC/PC laptops just feel bad to use (material texture/accuracy) relative to an Apple trackpad. Latest models are slightly closer in quality.
Incidentally, this isn't a problem on macOS. I often finding myself wishing that Linux GUI applications and frameworks imitated Apple's keyboard shortcut conventions rather than the derivative scheme that Microsoft ended up with after largely abandoning CUA. Having access to emacs/readline shortcuts on a separate modifier key from the more widely known cmd+X/C/V/etc. stuff is great.
> muscle memory makes for very easy pickup of intuitive gestures;
The problem with muscle memory is that the gestures vary between operating systems or even different devices on the same OS if the user has changed settings. Your muscle memory ends up getting in the way instead of helping and causes endless confusion.
I don't think this is a valid argument: muscle memory is extremely contextual. Your muscle memory for your phone is definitely different from your laptop, and I don't think the context switch is actually a hindrance.
This would extend to different devices that don’t share the same hardware (at the tactile/interaction level)
> The reason why the desktop hasn't changed much in 30 years is because it works.
That's not how innovation or success is measured in any way whatsoever. We had alchemy for centuries before Science, it was not right, nor was it good enough because it "just worked". This project is exploring a number of ideas and it reads like you are dismissing it because of the poor writing and its interest in non-traditional input methods. I think you would enjoy re-looking at the project if you just ignored the text.
It is fun to think about how much more usable the desktop could be, and it will be experiments and discussions like this one where we make progress towards that goal.
> No, they're not
I totally agree with you. Mobile sucks for just about everything if someone is used to working on a computer, but it is the case that many millions more people have mobile devices than laptops or desktops. If we could work on a system that gave them more power than the trash we have for them so far, that would be incredible.
>That's not how innovation or success is measured in any way whatsoever. We had alchemy for centuries before Science, it was not right, nor was it good enough because it "just worked"
well the problem with alchemy was that it didn't work. The desktop does indeed work in the sense that people who are productive and do heavy work do utilise traditional desktop paradigms. At least I've never seen a highly productive developer who is into tons of touchscreens and arcane finger gestures.
I don't really see how the performance of well configured keyboard commands is supposed to be beaten by voice or touch input, because physically the former is just significantly faster, and importantly, composable.
I like Newtonian physics better as a comparison. Is it a perfectly accurate system that solves all problems? No, it doesn't handle very small things or very fast things. But it's a pretty good model for anything I need to do with it.
I do like some of the ideas here though - especially the fullscreen column mode. It feels like a more powerful and flexible version of the multitasking in macOS plus what's coming in iPad OS. I think it'd fit right in on Mac, if they could figure out an intuitive way to interact with that windowing model.
Not sold on their trackpad gestures. No issues with multi-finger gestures in general, but I like having 3-finger drag as a direct interaction with my content instead of taking 3 fingers to interact with the window manager. Definitely wouldn't want to go back to the old double-tap-and-drag.
> I do like some of the ideas here though - especially the fullscreen column mode. It feels like a more powerful and flexible version of the multitasking in macOS plus what's coming in iPad OS. I think it'd fit right in on Mac, if they could figure out an intuitive way to interact with that windowing model.
There is a very mac-like feel to this concept. I think it would fit right in. I like that the Panels tell a strong story of incentives: That vertical space is important and at the same time that excessive and persistent menu bars are explicitly not important.
I also like your example about Newtonian Physics, I'll remember that one.
I like the other user's response to your first line.
> At least I've never seen a highly productive developer who is into tons of touchscreens and arcane finger gestures.
I totally agree, it would be silly and I imagine entertaining to watch for only just as long as it took to become incredibly annoyed by the scene. I do think, however, that we don't know what the peak of developer productivity is (nor if we should strive for it, but that's a different conversation). We don't know how humans should interact with computers and how tasks can potentially be represented by different software and hardware paradigms.
There are all kinds of keyboards. Chorded, the Space Cadet, Canon Cat, European vs US. I don't know anything about Asian language keying but I imagine it would lend an interesting perspective as well. Bill Buxton has a gallery of input devices that is fascinating. [1]
Today, to generalize, the most productive people use the standard system of the Desktop. But they also extensively use paper and conversation and walls of post-its or whatever their shtick; in the future I imagine that we will bring computing capabilities to these more human styles of expression. That, I believe, will look and feel nothing like the Desktop.
Sure, it's not how innovation is measured; self-evidently, something that hasn't changed much in decades is not innovative (anymore).
But it is how success is measured. The closer to ideal a product is, the less it needs to change. Think Coke, Excel, bicycles, SQL, etc. After an initial "Cambrian explosion" in each field, the product was stabilized and perfected.
I hate both Excel and SQL. Well, SQL isn't half-bad, but something like a composable GUI with a proper AWK plugin would be far better than that laggy and shitty "spreadsheet".
ooh, I've got to push back on that one. Just because I can't figure this idea out for myself, maybe this conversation will help.
> The closer to ideal a product is, the less it needs to change.
Now that I read this line again I realize yes it is true, but let me push back against the implied claim. I don't think that the Desktop 'needs to change less' because it is 'close to an ideal product.'
I'll propose that the Desktop doesn't change because it is entrenched. There are plenty of good, and more importantly, clear ideas about how personal computing can be made both more powerful and easier for the end user. What we have now is the result of capitalistic incentives. What we should have is deep knowledge work machines and fantastic end user programming capabilities.
Fun conversation starters below, not relevant to the larger discussion, but just thoughts I have in response to your post. I'd enjoy a response just to the above paragraph.
Coke may be an ideal sugar drink, but that seems like a measly category in which to be an ideal product. I can imagine Coke consumption rates dropping over time. (To counter my point, I did find that Coke stock has risen 20% in the last 5 years, compared to S&P500 rising ~15% in the same time). I can imagine them dropping due to the rising popularity and variety of other daytime casual drinks. I can imagine them dropping due to the increasing public awareness of the mal-effects of sugar. I'm sure Coke the company will do fine, but they could have used their clout over the last 100 years to spearhead a global health campaign, and they didn't. So I don't think they are an ideal product which doesn't need to change.
Excel came about pretty quickly (I don't know the specific history of MS Excel), but consider its predecessors: VisiCalc was a reason on its own to buy early desktop machines. I've heard of university departments buying early workstations just to permanently run VisiCalc. But that still doesn't make Excel ideal. Chris Granger has done a lot of work recently to evolve spreadsheets (more focused on programming, but they are intertwined). You could look at Light Table and Eve and say they were a failure because they didn't end up as a business, but I think that kind of product is inevitable in the near future as the interface that average people will use computers with.
I think you're right about bicycles. They are an incredible engineering development from almost every perspective. Maybe some pointers to the views Engelbart and others had about training wheels would be interesting to this conversation, though. Those are a "thing" that stuck around for quite a while, and appear to not really help the learning of riding a bike. [1]
I don't know enough about SQL. SQLite is something like the most popular database in the world right?
> The reason why the desktop hasn't changed much in 30 years is because it works.
Thank you. I am always amazed at why this is apparently so hard to understand to so many people. Perhaps most of them are young and the drive to innovate (for the sake of innovation) is simply so strong that it makes them ignore simple truths of life? In regards to desktop interface I so wish that many more companies and projects understood this already and stopped wasting our time with new fancy "ways to operate desktop"...
It's easy: the screen is a rectangle of pixels that needs to present information. We have inputs. For the inputs, the keyboard and mouse work the best. There are some other tools and there is some variation, but in essence, that's it. It is simply the best way for a human to operate a desktop. Out of those two initial variables there are only so many ways to organize the multitasking of that information, and we have figured it out already. The human design is not going to change for many more decades, the way information works in this universe is not going to change, the way we point to things on a screen is not going to change, etc.
I guess humans, despite being very sophisticated neural networks (ok, and perhaps some other architectures mixed into them), still get stuck in local maxima, and knowing that they tend to do that, they always try to get out of the maximum where they find themselves, just in case, to see and figure out if they have missed something. It makes sense, and thank god for all those people trying to do this and innovate for the sake of innovation. But to me at least it seems very certain that the desktop as it is today (and in many ways as it has been for 30 years) is the global maximum of computing interfaces. We don't need another way; this is the best way already. Solutions to some problems are just created very early. Sometimes the traditional is the best.
I am probably what they call a power user. There are few people who can pick up interfaces as quickly as me. I tried really hard to make my smartphone productive and I kinda managed (wrote a thesis on it to a big degree during waits at bus stops).
That means I really spent some time looking into productivity on smartphones and still didn't manage to get it somewhere comparable to a desktop PC after years of trying. The best thing was using an external keyboard, but then I could use my notebook as well.
OS owners are trying to adapt their OSes so they work across desktop and mobile, but as a result, the desktop OSes are far less productive.
A classic example was Windows 8, but Mac OS has become far less productive as well. Just look at the new Marzipan based apps, or the push for "Full Screen everything!" which is the complete opposite of the OS X paradigm.
In the Linux world as well, icons and buttons have been made unnecessarily larger to serve potential mobile and touch usage.
> In the Linux world as well, icons and buttons have been made unnecessarily larger to serve potential mobile and touch usage.
Please choose more specific language for that last point, because while I agree generally I only see ~two desktop projects (Unity and KDE) that have prioritized touch in the foss world (vs. the couple dozen others that haven't).
Meanwhile if I started using libinput-gestures and tweaked my Awesome config a bit I could have a desktop that more or less replicated the OP.
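To illustrate the libinput-gestures side of that (a minimal sketch, not a full config — the `gesture` line format is libinput-gestures' real syntax, while the specific xdotool bindings are just one choice, assuming Awesome's default Mod4+Left/Right tag-switching keys):

```
# ~/.config/libinput-gestures.conf (sketch)
# Three-finger horizontal swipes cycle through tags/workspaces,
# roughly like Neo's side-scrolling panels.
gesture swipe left 3   xdotool key super+Right
gesture swipe right 3  xdotool key super+Left
```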
Have you had to buy a new computer screen lately? When you're running a Linux desktop at 200dpi or more you will appreciate the larger icons. But on low res screens this immediately feels like a waste of space, partially because dpi scaling is such a mess.
Seems weird to design a desktop that only works with computers that have trackpads. It's basically a laptop-only OS, and even then it doesn't work for the huge swath of people who prefer a trackpoint.
Most laptops don't have eye tracking either. I believe this concept is explicitly ignoring backwards compatibility with existing hardware in favor of trying to envision the best UI possible with current technology.
It's also worth noting that the ideas they're proposing already exist, at least to a large extent. The "panel" system just looks like a tiling window manager, tags were a thing on BeOS and are implemented in userspace by KDE etc. so global search works better, and you can already control your PC by touch & voice — macOS puts Siri front & center, for that matter (not that I think that's a good thing).
> This seems to be very much change for the sake of it.
That's fine. I'm all for people experimenting. My opinion is that this "rethink" has been rethunk multiple times now and hasn't found enough friends to be successful, but hey; maybe all those other attempts at a 'productive' rework of the traditional desktop were just poorly implemented and this is The One!
What I am most thankful for is that this isn't being imposed on some existing, working, healthy desktop environment. I have suffered through too many iconoclastic desktop nightmares. They've all failed and had to be reverted and I loathe the thought of suffering another one. Metro, early KDE Plasma, Unity... I'm fed up with it and I'll bounce from whatever platform tries to put me through it again.
If this or any other desktop "rethink" is really so wonderful and amazing that it deserves to appear someplace that matters then let it demonstrate that by accruing a following on its own merits; DO NOT try to foist it onto some innocent captive audience. All that does is generate an epic amount of immortal hate for your work.
> This seems to be very much change for the sake of it.
It seems like exploration more than change. You seem to dislike the ideas without providing much substance other than reiterating the cliche "if it ain't broke don't fix it." It's unfortunate this is the top comment as this type of feedback doesn't provide much for the author or anyone else. Is there something specific you dislike other than change itself?
Not op, but I agree with them. To me, it's more like "if it ain't broke, don't break it". We already know how anti-productive mobile UI is and we've already seen attempts to make desktops more mobile-like fail hard (e.g. Windows 8).
> It may not be the prettiest or the most modern or even the easiest for a new computer user to navigate, but it works damned well for getting work done.
Just to troll a little-- the "prettiest or the most modern" comment leads me to believe you're arguing for desktop applications which get installed to the OS as opposed to applications which run in the browser. But I find browser apps as accessed on the desktop to be prettier, more modern, easier to discover, easier to install/re-install/remove, more accessible, and vastly easier for collaboration, than the native applications written for a given platform. On the desktop they run much faster than on mobile. And the thing is plugged directly into the wall so you don't have to worry about battery drain.
(The one caveat to this being audio/video/3d creation-- for most of that you still need proprietary native applications for workable and stable UX.)
I'd really like some kind of Linux distro that boots me straight into Firefox's most stable auto-upgrading release, then have some keyboard shortcut to gracefully degrade down to xfce Debian or whatever for everything else. Perhaps with a default dialog:
"Warning: you are about to drop down to a Debian box. Refresh Button, Back Button, Touchpad, Audio, Wireless, and Fonts will all be set to Frustration Mode. Ok?"
Funny, I went back to XFCE out of frustration with the more modern desktop experience on other DEs
Elementary's Pantheon is very nice until you have to consistently work on something for hours a day
The same goes for Gnome/KDE, with KDE coming in second just after XFCE thanks to its ability to set the scale factor to decimal values
True information ergonomics is subtle: you need a fair amount of laissez-faire so the user feels in control, but at the same time you have to fill gaps at the right moment.
I remember being shocked by an old AS/400 application that was so lean, so easy to learn and use, and so useful. It blew away any administrative or even web (the new trend at the time) application I'd used, in every dimension.
Of course it was about to be retired and replaced by .. a web app.
Ditto; that's why I hate today's MS Access and all that shiny crap.
With a simple Tcl TUI frontend to an SQLite database, you could do the same stuff - built-in help included - with no effort at all, running on half the resources.
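To make the comparison concrete, here's roughly what that kind of lean app boils down to - a throwaway sketch using Python's stdlib `sqlite3` instead of Tcl, with a made-up `notes` table:

```python
import sqlite3

# In-memory database standing in for the app's backing store.
# The table name and columns are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")

def add_note(body):
    # Parameterized insert; no ORM, no framework, half the resources.
    db.execute("INSERT INTO notes (body) VALUES (?)", (body,))

def list_notes():
    return list(db.execute("SELECT id, body FROM notes ORDER BY id"))

add_note("call supplier")
add_note("file Q3 report")
print(list_notes())  # [(1, 'call supplier'), (2, 'file Q3 report')]
```

Wrap the two functions in a read-eval loop and you have the whole "administrative app" in a few dozen lines.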
Phones and iPads _are_ much easier to use, for a lot of things. Not necessarily for writing essays or doing spreadsheets, but there is a whole range of things that are significantly quicker and more intuitive on mobile.
For what it's worth, I recall this being a student project from around 2016. I could be wrong, but it seemed like impressive output under the circumstances.
I find the Windows UI often has infuriatingly poor ergonomics, especially when it comes to mouse usage. For instance, why on earth put the close button so far away on the right, where reaching for it builds up arm pain quite fast? Why are some heavily used buttons so small?
When I watch people who are not used to keyboard shortcuts use Windows (Explorer, for example), I'm quite terrified by their painful, and consequently slow, mouse-based workflow.
One reason might be that with a maximized window, the 'close window' cross sits in the top-right corner of the screen, which is one of the effectively infinitely large targets: you can fling the pointer at a corner without overshooting.
That’s a weird complaint. It’s like saying “yet another application programmed by a programmer”. Well, who else is going to do it? It’s right there in the name!
I’m guessing your objection has to do with a GUI being built by a designer that has a sense of visual hierarchy but no sense of usability/interactivity/ergonomics. I’m not sure if that’s the case here, but with that I can agree.
Do non-technical users typically do a lot of actual work on their mobile devices? Ask a developer to write a full software suite on a phone and you'll get basically the same response that you're describing, just from the other direction. This analogy just doesn't work because it's simply the statement that people who don't know how to do X will struggle to do X.
>Do non-technical users typically do a lot of actual work on their mobile devices?
What does that have to do with anything? The discussion is about desktop UI.
And non-technical users are a large chunk of the people doing work with computers; devs are a minority. I'm not sure why I need to mention this point, but it looks like that's where we are.
> it's simply the statement that people who don't know how to do X will struggle to do X
That's a tautology. If the only test for a UI is whether it is usable with step-by-step instructions, there is no distinction between, e.g., MS-DOS and Windows 10.
Smartphones cannot cheat Fitts's law, they will always be much slower to use than desktop computers.
> Ask any non technical user to open an app that is not on the desktop, or to find a picture hidden in some random folder.
What does it have to do with productivity? Ask them to find a picture 100,000 times on a smartphone and on a desktop with similarly capable software, each using its platform-specific UI features. But you didn't suggest that, because I'm sure you intuitively realize how ridiculous that comparison would be.
> Smartphones cannot cheat Fitts's law, they will always be much slower to use than desktop computers.
Why would Fitts's law have anything to do with smartphones being slower? If anything, it's the opposite.
A phone screen is smaller, so all targets are close to each other - and Fitts's law says those are faster to reach than targets spread across the large screen of a desktop.
BTW, that's also why mobile interfaces have big icons with lots of white space around them - Fitts's law says that bigger targets are faster and easier to reach.
To fit equivalent desktop functionality onto a smartphone you have to make targets either smaller, which makes them impossibly slow to use, or hidden behind other targets, which again makes them slower by requiring several hits instead of one. I.e., you can't cheat Fitts's law.
> targets will be close to each other
Targets that are close to each other are effectively smaller targets, because the fingertip you hit them with has real dimensions and has to land in a smaller area to avoid hitting the neighboring targets too.
But on a desktop everything is far away, so that doesn't provide an advantage either. And hitting the targets that are visible will be faster on mobile.
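For what it's worth, the disagreement above can be made concrete with the Shannon form of Fitts's law, where predicted movement time depends only on the ratio D/W - so shrinking both distances and target sizes (as a phone does) leaves the prediction unchanged. A quick sketch; the constants `a` and `b` are illustrative, not measured for any real device:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1).
    a and b are device-dependent regression constants (made up here)."""
    return a + b * math.log2(distance / width + 1)

# Close-but-small mobile target vs. far-but-large desktop target:
# same D/W ratio, same predicted movement time.
mobile = fitts_time(distance=30, width=8)     # mm, roughly thumb scale
desktop = fitts_time(distance=300, width=80)  # mm, roughly mouse scale
print(round(mobile, 4), round(desktop, 4))
```

Which is the point both sides are circling: neither screen size wins by itself; only the distance-to-width ratio of the actual targets does.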
Ask anyone to do any real work – i.e. to be productive – on mobile and watch them struggle, at least in relative terms vs. how they'd fare using a real computer.
Or any job that requires creating documents or spreadsheets, managing inventory, and so on and so forth. A small business might start out doing these things on a mobile device, but at some point you are just not going to be productive trying to keep up with them without a PC.
The closest to using a mobile device for "real work" that I see regularly in the real world are those Square POS systems that are basically just a tablet on a stand with a card reader. Beyond that, it's just things like vendors at a farmer's market or something. Not really high volume.
Fortunately some people ignore the standard "realistic" advice that doing a thing is pointless because the status quo is good enough.
Perhaps the desktop is adequate for you, but for some of us the thoughts come so quickly that the interface becomes an immense bottleneck. For some types of work, tmux, zsh,and a low latency terminal ease the pain. For other types of work, a highly optimized GUI is beneficial.
I welcome new experiments in user interfaces. While Neo might not be "the solution", it might bring new ideas that eventually get used in other better solutions.
Yeah, because introducing new stuff always completely replaces the old stuff and doing things the way we did x years ago because we're used to it is such a great recipe for progress.
A desktop is not a mobile device. Design practices on one don't always translate well to the other. Learning how to use the keyboard well (i.e. touch typing) and learning one's OS's shortcuts allows one to be pretty damn efficient with whatever they're doing on a desktop. I'd posit that any near-term productivity gains are going to come from refining personal habits (e.g. minimizing external distractions and actually doing work).
AWK worked fast enough in a previous article referenced from HN, and that's software solidified in the 80's.
Virtual desktops and something akin to the FVWM interface is truly the best interface ever. There's no need to iconify (minimize) windows, because all the junk can be sent to different pages of your environment, letting you focus on your actual work.
It may be outdated according to the Gen-Z vision, but it works well and is ridiculously fast to grasp.
Just personally speaking, I find macOS already has a version of many of these features implemented (using their touchpad/gestures) in a more polished way—and I retain the mutability of the underlying system. It also has tagging, and the new 'stacks' on the desktop add another level of categorization.
I can even full screen any application and line them up as workspaces and navigate between them in a "carousel"-like fashion by swiping with four fingers.
I don't have a left hand, I don't have six fingers, guess I'll stick with my mouse.
I'm sure this could be adapted for one-hand use, but- and maybe I'm slightly biased, but I think even if I still had two hands I'd prefer interfaces that don't require me to use both to do simple actions.
In the modern design world, there are a lot of assumptions that everyone is able-bodied, and it's getting worse.
While I am able-bodied, I'm also left-handed, which is annoying on a daily basis. Ignoring all the real-world issues, touch interfaces are optimised for right-handers. A 'flick' gesture for a right hand is a 'push' gesture for a left hand; I've lost count of the times a touchscreen or touchpad-based OS has misinterpreted my badly-done 'flicks' as 'pushes' and scrolled the screen instead of going back or forward. There are endless issues just like this, all because my finger profile is mirrored compared to right-handers'.
If Google enforces the 'pill' navigation on newer Android versions (right now it's an option), I'm going to have to seriously consider switching to iOS, as the 'pill' depends on right-handed 'flicks' to the left to 'Go Back' - which for left-handers is a 'push' action, and won't be properly recognised.
Yeah, right now my biggest challenge is VR. I have been able to work around everything else, but VR is increasingly becoming the domain of those with 10 fingers and two wrists. I can play Beat Saber by taping the controller to my arm, but the scoring system punishes me for not having a wrist. /sigh
Re: pill navigation - I find the navigation bar an outdated concept and I don't know why we still have it at all. I use an app called "edge gestures" to navigate with swipes, and I disable the system navigation bar. I suggest you try that.
Six-finger gestures feel like a huge step backwards to me. It's bad enough that I have to move one hand off the keyboard, but now I have to move both? Clearly this was designed by someone who exclusively uses a laptop with a trackpad in the middle.
What are these tasks that people "switch to mobile devices for" while using their desktop? When I'm working on a desktop, there is literally nothing a mobile device does faster, or more conveniently O_o
This actually seems like it would be pretty good on something like the Librem 5! I like the idea of a tiling-esque window manager with this level of flexibility, especially on a phone. The launcher/sidebar/fitting/pinning/minimization features seem like an improvement over existing touch interfaces, especially on linux.
I don't think I would abandon my i3 setup for something like this for work, but I could see myself happily using something like this in a more casual setting. I hope that some of these ideas see the light of day with an actual implementation.
Moving to gestures on a desktop is a productivity nightmare.
There is certainly room for improvement, I like the panels window management, with very large/super wide monitors third party solutions are often used to provide similar functionality.
Tagging/multi-patch hierarchies over folders could be really really useful, but potentially more of a nightmare if not done really well.
The failure to leverage keyboard commands is the biggest one here. It's basically designing a mobile interface for a desktop, ignoring the actual usefulness and reality of desktop interfaces having functional keyboards and mice.
The "Alan Kay" references in the video is a good cue of what's on the author's mind -- I guess the goal is to create an environment for contents, not something like a general-purpose desktop.
Quite ambitious I'd say! But I would also suggest that this goal (if I guess it right) could be better explained than just claiming this to be a "desktop reimagined", considering that a commodity desktop has neither a trackpad or eye-tracking device.
The discussion here has already made it clear -- hashtags for file organization is anything but new. It can be even simulated with a filesystem pretty easily. Just post everything into a big content folder, and then create a folder for each tag, then `ln -s`.
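A minimal sketch of that `ln -s` simulation, in Python rather than shell (the directory layout and file names are made up): everything lives in one `content/` directory, and each tag is just a directory of symlinks back into it.

```python
import os
import tempfile

# Scratch layout: <root>/content holds the real files,
# <root>/tags/<tag>/ holds symlinks into content.
root = tempfile.mkdtemp()
content = os.path.join(root, "content")
os.makedirs(content)

def tag(filename, *tags):
    """Attach each tag to a file by symlinking it into the tag's folder."""
    for t in tags:
        tagdir = os.path.join(root, "tags", t)
        os.makedirs(tagdir, exist_ok=True)
        os.symlink(os.path.join(content, filename),
                   os.path.join(tagdir, filename))

open(os.path.join(content, "invoice.pdf"), "w").close()
tag("invoice.pdf", "finance", "2019")

# One file, visible under every tag it carries.
print(os.listdir(os.path.join(root, "tags", "finance")))  # ['invoice.pdf']
print(os.listdir(os.path.join(root, "tags", "2019")))     # ['invoice.pdf']
```

Deleting the real file leaves dangling links, which hints at why a proper tagging system wants a database rather than the filesystem trick - but as a simulation it's one afternoon of shell scripting.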
It'd be more awesome if a miner extracted semantic tags and metadata from the files. Not a new idea either; there were Google Desktop, gnome-tracker, etc. macOS Spotlight seems to be the most popular descendant these days. Microsoft is making another push with Cortana/MS Graph/Windows Search or whatever they call it.
The relationships between pieces of content are also an interesting aspect. Hashtags just throw things into buckets but do not help relate them in any broader sense; even wiki links/hyperlinks are more effective. Project Xanadu also has interesting ideas: a reference link addresses by content, so you can reference a portion of the source (without the source defining it, unlike HTML anchors).
But again, this was the vision of the pioneers, and it did not take off smoothly. IMHO that's partly because the idea is too big and involves a lot of smaller (yet still challenging) pieces - natural language understanding, resource description, app interop, and also the UX problem the OP is trying to solve - and none of the previous attempts had both the depth and breadth to cover enough of a user's everyday routine with a desktop environment.
> could be better explained than just claiming this to be a "desktop reimagined", considering that a commodity desktop has neither a trackpad or eye-tracking device.
I enjoy dragging my windows around, resizing them and arranging them in arbitrarily complex ways. I don’t like being constrained to work in a way that someone else came up with. I even like the act of dragging windows and such, I’ll even do it when I’m thinking as something to play with and it makes me feel more connected to my work.
The tiling mechanics[1] looks very similar to our gnome-shell plugin: https://github.com/paperwm/PaperWM (we also allow vertical tiling and mixing floating windows)
To my knowledge, no other tiling window manager implements this mechanic(?). I.e. traditional tiling WMs force all windows in a workspace to fit within the monitor.
This mockup (and paperwm) organize the windows in a non-overlapping strip that is allowed to extend beyond the left and right monitor edges. This allows for a nice spatial map.
Say my monitor has room for two windows but I need to use 3 windows. With a tiled strip this workflow is quite nice: (| indicates the monitor edges, AA the content of window A, etc.; ^ marks the active window)
C|AAB|
^^
<switch to prev window>
|CAA|B
^
A tabbed tiling layout achieves something similar - e.g. put A and C in a tabbed frame - but then it's not simple to view A and C at the same time.
It's also possible to define more specialized operations: When one window is primarily used for input and two mostly for viewing (eg. editor, documentation, code-artifact) I use the following setup:
|AAB|C
^^
<swap right neighbours>
|AAC|B
^^
A workspace grid (a couple of windows per workspace) also gives a spatial map, but does not let you look at windows from different workspaces at the same time.
In addition we have a floating layer that can easily be toggled. Useful for windows I need access to from a large number of places (across workspaces, etc.)
We also implement touchpad gestures to switch workspaces and scroll the tiling left/right (only on Wayland).
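The strip diagrams above boil down to a list of windows plus a viewport offset. A toy model (not PaperWM's actual code) that reproduces the two transitions shown, with each window occupying one unit-width slot:

```python
class Strip:
    """A non-overlapping horizontal strip of windows; the monitor is a
    fixed-size viewport sliding over it (toy model, unit-width slots)."""
    def __init__(self, windows, visible=2):
        self.windows = list(windows)   # left-to-right order
        self.visible = visible         # how many slots fit on the monitor
        self.offset = 0                # index of the leftmost visible window

    def view(self):
        return self.windows[self.offset:self.offset + self.visible]

    def focus_prev(self):
        # Moving focus left past the monitor edge scrolls the strip.
        if self.offset > 0:
            self.offset -= 1

    def swap_right_neighbours(self):
        # Swap the two windows to the right of the leftmost visible one.
        i = self.offset + 1
        self.windows[i], self.windows[i + 1] = self.windows[i + 1], self.windows[i]

# C|AAB|  ->  <switch to prev window>  ->  |CAA|B
s = Strip(["C", "A", "B"])
s.offset = 1
print(s.view())   # ['A', 'B']
s.focus_prev()
print(s.view())   # ['C', 'A']

# |AAB|C  ->  <swap right neighbours>  ->  |AAC|B
s2 = Strip(["A", "B", "C"])
s2.swap_right_neighbours()
print(s2.view())  # ['A', 'C']
```

The nice property is visible in the model: windows keep their left-to-right order no matter how you scroll, which is what gives the stable spatial map.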
The dimensionality on these things is so high. The only way to tell if this idea is any good or not is to make it and use it for a while.
Linking out to a bunch of other people's opinions and tossing in some Alan Kay quotes doesn't add much. The IETF also designs on paper instead of in code, and the result is always a hot mess.
This "project" isn't even vaporware, its bellow that. I don't thinks there's even a word for it. And even in concept its seems, to be nice, lackluster.
Its a smorgasbord of random concept from mobile, things already available in desktop os (panels? snap to edge?) and frankly absurd thoughts like organizing files by tags or voice/eye tracking based interfaces at this point in technology.
There's a reason people with large motor disabilities still use those stupid rods versus eye tracking or voice recognition.
Based on the projects from the author on the page it seems like hes run-of-the-mill idea man who jots down whatever brain blast he has before bed and spends the rest of the 99.9% of the time in the project designing the website instead of thinking the idea trough.
I feel like the best desktop interface I have used is Ubuntu's workspace implementation. Typically you have one app, e.g. your browser, per workspace and seamlessly transition between them using ctrl + alt + arrows. The animation is very snappy and doesn't get in your way.
The workspaces are laid out in a grid (e.g. 4x4) so you naturally adopt a convention for where you place applications. For me, I have browsers and desktop apps along the bottom, terminals and IDEs along 2nd row and then GVim instances in the remaining top two rows. I can almost instantaneously switch between any window I like.
I like this setup so much that when I was using MacOS for a brief period, I installed TotalSpaces2 which emulates Ubuntu style workspaces.
Looks really cool. I especially like the ideas around eye tracking and gesture input.
There are some things that might not work out as well as the author is envisioning, but that's what usability testing is for. I'd love to give this a try if it ever became a thing.
Kind of like QWERTY vs Dvorak keyboards. QWERTY just works; but technically, Dvorak IS faster. The design presented is definitely a well-thought-out mash-up of what I've come across through multiple projects with creative and cutting-edge UI/UX.
There is a lot going on, and it definitely feels more like an experiment than design informed by all the relevant human disciplines.
I am not sold on panels and snap in place. I would much prefer to see something like a zooming user interface. Mess is a necessary step to go from broad to clear thinking.
Also, trackpads are not necessarily ergonomic and may hurt your fingers in prolonged use, though the sub-menus designed for swiping are interesting. The gaze tracking is a really interesting idea, and I wonder if there is anything out there that pushes it further. Anything that can improve ergonomics is good in my book; that's the most pressing pain point of modern user interfaces and hardware.
Experimenting with interfacing and desktop technologies is a massive, thankless and risky job. Anyone who tackles it is a braver person than i am.
I've been using computers since the early 90s, and as far as desktop interfaces are concerned, there have been two: the full CLI and the Windows/Mac/Gnome/other clone. Touchscreen mobile has had a similarly homogeneous "evolution". Current OS frontends are fiscally optimal, not operationally so. That is to say: as soon as one company had a reasonably usable interface, everyone copied it instead of opting to fundamentally rethink the landscape.
Hats off to hackers and designers taking on this challenge.
How do you easily add a security context using tags for file access? Tags seem pretty broad, but I guess you could have an ACL that denies access unless you have access to a tag named for that application. But you also want the ability to allow specific user/group access, so it seems like you'd end up with a complex interaction between tags that deny access and tags that allow it.
I imagine there's some good prior work out there regarding this, but the complete lack of any mention of how this affects the security of the system isn't promising.
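For illustration only - nothing like this appears in the project - one way the "complex interaction" could be kept tractable is to attach an allow-set to each tag and require every tag on a file to admit the requester:

```python
def can_access(user, user_groups, file_tags, tag_acls):
    """Hypothetical policy sketch: a file is readable only if every one of
    its tags admits the user. tag_acls maps tag -> set of allowed
    users/groups; a tag with no entry is unrestricted."""
    principals = {user} | set(user_groups)
    for t in file_tags:
        acl = tag_acls.get(t)
        if acl is not None and not (principals & acl):
            return False  # one restrictive tag is enough to deny
    return True

acls = {"payroll": {"hr"}, "internal": {"staff"}}
print(can_access("alice", ["staff"], {"internal"}, acls))             # True
print(can_access("alice", ["staff"], {"internal", "payroll"}, acls))  # False
```

Intersection semantics like this ("most restrictive tag wins") sidestep the allow/deny conflicts the parent comment worries about, at the cost of making an accidental tag a denial-of-service on your own file.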
It's a nice idea but you don't need to reinvent an OS to achieve most of these concepts. In fact, you can do almost all of these things today on Linux using i3 + rofi.
Actually, i3 is one of the main reasons I use Ubuntu on a daily basis (the other is apt). As far as I know there are applications similar to i3 for macOS, but they are not as good as i3. So sad.
exwm [1] is the best change I have made to my desktop experience in terms of improving productivity. For a tutorial on how to set exwm up see Uncle Dave Emacs Tutorial 14 - EXWM aka managing X windows with emacs [2].
So it is a tiling wm with eye tracking and a database for a file system? Seems a lot less sexy when you put it that way. I also see this being very frustrating to use!
All it needs is a "reloaded" or "evolved" and it'll be just like every other design-wank "reimagining" of a desktop UI I've seen for the past 20 years. Show me -- with data to back it up -- how this makes me more productive and I'll give it a closer look.
Best stick with "reloaded" for now, you've already got the word "Neo" in there and "evolved" is Microsoft's thing.
> The desktop computer hasn’t changed much in the last 30 years. It’s still built on windows, folders and mouse input. But we have changed. We now use smartphones and tablets most of the time, since they are much easier to use.
The wheel hasn't changed much in the last 1000 years. It's still built round, with spokes or other struts. But we have changed. We now use planes and boats most of the time, since they are much easier to use.
That's a horrible analogy. And factually wrong. We use boats because wheels don't work very well on water. We use planes because they're faster. We don't use either anywhere near as much as we use trucks, cars, rail, and wheelbarrows.
When I was a teenager, I used to really hate overlapping windows. If I'd known what a tiling window manager was at that time, I probably would have switched.
As I've gotten older, I've come to appreciate overlapping windows much more. I find them a really good use of screen real-estate—even on relatively large screens where I have room to spare—and wouldn't ever want to lose that functionality.
Seeing other things around the edges of your main window reminds you they exist. Reminds you there are other things you need to pay attention to as part of whatever you're busy doing.
Maybe teenage you just never did anything complicated enough to need that ambient awareness of other parts of a project.
> Panels use screen space more efficiently and are a more elegant way to multitask than normal windows.
This is flat out false for me. I generally have 3-6 windows open on my main monitor that I am actively using. Some of them are only partially visible, but still very useful. One takes up 75% of my screen, but I still want to see information in the others while I'm using it.
Looks like the happy marriage of DesqView and Markdown. I'm very here for this, I am sick to the back teeth of having 16:9 monitor and a 4:3 desktop paradigm. It grinds my gears that most of the games on my Playstation have vastly superior UIs to my workstation. Add graph extraction as a core UI function and this would be the best thing since Netscape.
No desktop productivity OS with a linear Mission Control-style desktop switcher is going to be taken seriously by me. I have fought for TotalSpaces for so long just because the idea of a grid makes so much more sense than a line of apps. Think about it: if you have 9 spaces, do you really want to scroll 4 spaces in, or 1?
While I don't like the emphasis on gestures for navigation, the central info panel bringing different info into one screen is nice. I also appreciate the use of tagging for file info, but I can't see it replacing file names anytime soon. The Mac has file tagging and I rarely see people using it.
This looks a lot like an old concept piece called 10/GUI - I thought it was a cool concept then which is what made me remember. Possible plagiarism? Link: https://youtu.be/tf03YBxCyGI
Can anyone explain how this is more than a tiling window manager? It seems like i3 with touch gestures and a fancy skin. I don't say that to be disparaging - I quite like tiling window managers - but I don't see this as "rethinking" anything.
Looks like a combination of macOS married with Windows and bedding with iPadOS :) .. What about over-populating one screen with data? How does it affect concentration levels?
Also, the website took just under a year to load its images, can you please optimize that first :)
My only visible interface is a borderless xterm on a plain background, and a few full-screen apps an ALT-TAB away. My colleagues treat me like the last member of an endangered species.
Much of this seems to boil down to window management, which I've used to great effect in Blackbox for 20 years (pinning, rolling, always-on-top, transparency, tiling, etc.)
While I don't want a mobile UI on my desktop (do you sit at a keyboard and only use your thumbs??), it is true that MS and Apple have been content with their duopoly and are not innovating in this space at all (for each useful feature in either, I can think of a regression).
The only reason I have windows over linux is gaming, and the only reason I have mac (book) is the hardware. Neither reason to use either OS is the OS itself!
It's an interesting combination of incredible ideas with unusable ones.
First, there's an interesting disconnect between app panes and what I actually do in practice. This concept seems to organise windows as a stack. In practice, I use them more tree-like. I already notice an annoyance with Android's stack-like app switching, which basically seems to implement what is proposed here, and I expect it to be worse on desktop.
Furthermore, it seems like a poor fit for larger monitors. On my 32" 4K monitor, I rarely want full-height windows: having half-height is more than enough, which enables me to easily have 4 or 6 windows open at the same time. With full-height panes, this is not possible.
Second, app control, app menu, and context menu: great ideas, and have been tried by various window managers.
But the real trouble is in the other ideas.
The finder is nice at first glance - it would be a great addition to file managers, and a large portion of it already exists. But it proposes replacing them, which seems like a really bad idea to me.
It is unclear to me how this is supposed to work with large amounts of content: how is this supposed to work with tens or hundreds of thousands of files? Either you end up adding dozens of tags to every file, or the tags devolve into a path-like structure, defeating the purpose. Additionally, it seems lacking in discoverability. It might work to discover a project folder, or a file within a project, but I'm doubtful that full-filesystem would work.
It does solve a real problem, though: I often try to put files in a logical directory structure, but there's often more than one possibility. I'd like pictures to be stored by date, but also accessible by subject and context. The same goes for a lot of other documents. Being able to tag a single file or a directory would be great, but it doesn't replace folders. The same applies to email, bookmarks, contacts, books, and all the other stuff mentioned.
Eye tracking is a bit similar. It might feel magical the few times it actually works, but it seems quite useless when we still need a mouse for precision work. Furthermore, as I'm typing this, I'm reading text in another window as a reference; eye tracking would completely break that. Focus Mode sounds incredibly distracting to me, Just Type seems actively harmful, and dismissing notifications after a single look sounds great at first but backfires for notifications that require action at some point in the near future. It's a fun concept, but it sounds more like a solution looking for a problem. A mouse is always going to be vastly superior.
And voice control? Seriously? In my experience, every single voice interface is absolutely horrible and incredibly distracting. Unless someone manages to get Artificial General Intelligence working, it's probably going to remain nothing more than a very nice gimmick, not a core concept of a productive UI. Furthermore, how's this going to work in office spaces or libraries? Calls are already bad enough, forcing everyone to talk non-stop to their computer is going to make it impossible to actually do anything productive.
To conclude: it seems like all the good stuff has already been (mostly) implemented!
> It's an interesting combination of incredible ideas with unusable ones
haha yes, I agree.
> It is unclear to me how this is supposed to work with large amounts of content: how is this supposed to work with tens or hundreds of thousands of files? Either you end up adding dozens of tags to every file, or the tags devolve into a path-like structure, defeating the purpose.
This project, and other real tagging systems, take this into account. It is not defeating the purpose if you end up with something like a hierarchy for certain things in your file system, and nesting tags allows for that. The best system is not one or the other; it is both. A quick example: a lot of projects have a large number of files, but only a small number you care about or are actively editing or reading. Imagine a hierarchical system containing the large mess of files, and a tagging system keeping each of the ones you care about at hand. This is of course possible with symlinks, but this project attempts to make this kind of organization effortless.
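A toy sketch of the nested-tag idea (tag names and files invented): querying a parent tag also matches files carrying any descendant tag, which is how tags can subsume a hierarchy without devolving into paths.

```python
# Each tag may have a parent; files carry flat tag sets.
parents = {"invoices": "finance", "receipts": "finance"}
files = {
    "q3.pdf": {"invoices"},
    "lunch.jpg": {"receipts"},
    "memo.txt": {"misc"},
}

def matches(tag, query_tag):
    """Walk the tag's ancestor chain looking for query_tag."""
    while tag is not None:
        if tag == query_tag:
            return True
        tag = parents.get(tag)
    return False

def query(q):
    """All files tagged with q or any descendant of q, sorted by name."""
    return sorted(f for f, tags in files.items()
                  if any(matches(t, q) for t in tags))

print(query("finance"))   # ['lunch.jpg', 'q3.pdf']
print(query("invoices"))  # ['q3.pdf']
```

Broad queries use the hierarchy; narrow queries use the leaf tag directly - which is the "both, not one or the other" point above.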
> but seems quite useless when we still need a mouse for precision stuff.
again, why not both? I prefer to not use the mouse whenever I can, but I enjoy using it when precision with a pointer becomes required.
> The same applies to email, bookmarks, contacts, books, and all the other stuff mentioned.
He does scare you off a bit in that demo video when he says "and everything is there!" I agree. There is work to be done with tagging systems, best practices, and mindshare, but it is a system that has been in the works for decades, and I believe it will eventually become a widespread standard alongside the fall of walled gardens. But that's beside the point.
There are cool ideas here, and while they may have mostly been implemented I look forward to the day where all these desktop innovations can be easily brought together or disabled on an individual level with great ease. I think there are orders of magnitude easier and more efficient solutions waiting for us to put them together on modern hardware.
> No, no, it isn't. Desktop is far more productive than mobile ever will be or can be.
Desktop is usually more productive when you're sitting down to work.
Desktop is probably not more productive if you're walking down the street or riding a crowded bus or subway.
It all comes down to your environment. The mobile UX is optimized for when you're mobile and can't provide the kind of higher-precision input typically required to interact with a desktop UX.
I've been looking for a UI/environment that lets me persist the state of my applications and documents across reboots, and organize them into an indefinite number of workspaces. Each workspace can be named and tagged, and accessed through scrolling through them, by name, by tag, or by URI (I'd drop the URI's into my Emacs Org mode todo). A workspace supports a contextual state of some effort I'm working upon/researching, and views a set of documents, and can branch into a hierarchy of sub-workspaces, and the workspaces are movable in a mind-map display (or an outline form if you want a less-complex mind-map).
I want specific locations/views within documents available as hyperlinks, and the state/arrangement of a workspace snapshot point-in-time available as a name, tag, or URI. Lots of this is probably doable under Linux in one of the programmable tiling window managers, with the exception of capturing any arbitrary location/view within a document as a labeled entity, I haven't run across one yet, but I can't believe I'm the only person out here that wants more sophisticated handling of complex workspaces. Right now, I'm using a clumsy, time-consuming Emacs Org mode document to keep track of what I'm doing.
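None of this exists as described, but the core of it is just snapshotting window/document state under a name and tags. A rough sketch with an invented JSON schema (every field name here is made up):

```python
import json
import os
import tempfile

def save_workspace(path, name, tags, windows):
    """Persist a workspace snapshot: each window records its app, its
    document, and a view position within it. Schema is illustrative."""
    with open(path, "w") as f:
        json.dump({"name": name, "tags": tags, "windows": windows}, f)

def load_by_tag(paths, tag):
    """Return every saved workspace carrying the given tag."""
    found = []
    for p in paths:
        with open(p) as f:
            ws = json.load(f)
        if tag in ws["tags"]:
            found.append(ws)
    return found

p = os.path.join(tempfile.mkdtemp(), "research.json")
save_workspace(p, "paper-draft", ["research", "writing"],
               [{"app": "editor", "doc": "draft.org", "line": 120}])

print([w["name"] for w in load_by_tag([p], "research")])  # ['paper-draft']
```

The hard part the poster identifies - capturing an arbitrary location/view inside any application's document - is exactly what the `{"doc": ..., "line": ...}` record hand-waves over: it only works for apps that expose their view state.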
Well, HN seems to have collectively shit on this project in this comment thread. I agree with some of the negatives, but I don't understand why those would be the facets that stick out in this conversation. Yes, his site text explaining the motivations is pretty bad. App control is nothing new. And many gestures are similar to other systems. But criticisms based on security, the originality of his ideas, or his general use of swipe gestures are just bullshit.
It is backwards and naive to think that desktops are just fine, and computer programmers are historically the last to understand and embrace change. (Now here is where we could put some Alan Kay quotes; I'll start with his exasperation at our lack of a real CAD system for programming.)
> "we would have something like what other engineering disciplines have in serious cad system, a serious simulation of the cad designs and a serious fab facility, to deal with the real problems of doing programming. Ivan [Sutherland] just jumped there [with sketchpad]." [1]
So let's talk about the interesting features he presented, and do ourselves the service of learning from this work.
Panels: Top-to-bottom content is DIFFERENT FROM ALL MAINSTREAM OSes. He is lamenting permanent status bars, the Windows ribbon, the Chrome tab bar, and more with this feature, and he goes on to explain alternative features to replace the functionality he moved in pursuit of more vertical space. He also showed a number of well-designed navigation situations with panels: pinning an active window, scrolling, and minimizing windows to reinforce spatial memory and leave breadcrumbs. He also accounts for resizing windows.
See the new c2 federated wiki for interesting uses of vertical space and breadcrumbs [2].
Tags: As another user mentioned in the comments, a lot of work has gone into the study of PIM, and tagging is quite effective. Of the three approaches (search, hierarchy, tags), none is found to be the best, but the availability of all three is important. This project does us the service of reminding us that we are generally missing that third option; this system offers all three. (I'm sorry, I can't find my source right now.)
Search: Searching across all elements of personal computing (email to tabs to applications to files) is an interesting idea. Yes, omniboxes have been around forever and will be, but this project pushes the idea that there are even more hooks to toss into that system.
Gaze: This is fantastic, and there are limitless opportunities. Of course it's not a silver bullet; you won't be taking my tiling WM keyboard controls away from me (see that Onion video on the keyboardless Apple), and obviously I don't think that way. But there are cool interactions that very few people, if any, have had the ability to come up with on gaze-augmented PC systems.
Touch: Everyone is saying that tiling WM controls are way better. Of course they are. What percentage of PC users have tiling WMs? Let's just round down to 0%. This brings that kind of efficiency to users who would otherwise never have it.
I appreciate the commenters who have looked at this project and reflected on it. I learn a lot from and really enjoy reading HN, especially the comment threads; I hope I can pay some of that back with this.
>(now here is where we could put some Alan Kay quotes, I'll start with his exasperation at our lack of a real CAD system for programming)
CAD is maybe the worst example. It requires precision so that a bridge doesn't collapse, and typing commands with numbers gives you that in a much better way than a pure GUI.
Good point, but I think he uses the CAD example more in the context of simulation than as defending the GUI.
I spent a few years in architectural CAD software and afterwards went into computer science, and in retrospect I was surprised how often I was already doing command-line-like things before I knew the first thing about a shell or REPL. It seems you had that experience too; I have not heard or read many people making that observation before.
I like a lot of the concepts in this article, and if someone were to build this I'd be very happy to use it, but they seem more incremental than game-changing, very tactical or utilitarian. It's like taking patterns from modern websites and applying them to the desktop: drawers, tags, cards, filters, notifications. There were some more advanced things like the eye sensor, touch, and audio, but I was expecting some groundbreaking stuff, since your original premise was about desktops not changing over the last 30 years.
A lot of sci-fi movies stretch the imagination; they tend to think outside the box. I was expecting something like that, like Ghost in the Shell hologram displays, or displays embedded directly in the retina.
Ultimately, it comes down to convenience: finding ways to meld the computer and physical worlds such that the lines are blurred, letting the computer be an extension of oneself, kind of like cars to humans. For example, typing is a very unnatural thing, and to some degree there's a lot of churn translating from one's brain to the keyboard and then to the computer. Voice would be a more natural way of input, but imagine trying to write code, or this article (gasp), using voice only; my throat would be dry after the first function. So voice isn't a full replacement, but I think being able to tap things in a context-sensitive way would be good. Borrowing from web design, the fewer clicks the better: imagine writing code by tapping, choosing functions instead of having to write a for loop every single time, checking for errors, etc.
My feedback would be to take several steps backwards, look holistically at the computer, and dream of how humans could interact with them better, more efficiently, and more naturally. That, to me, is the crux of what a desktop represents: the interface between man and computer/machine. Some of the solutions you dream up could very well be based on patterns found on mobile phones or web pages, but don't let that limit you. The ultimate goal is to make the computer an extension of one's hand, brain, etc., just like a well-built car is an extension of one's foot and hand.
Kind of what I'm thinking is that we could be walking around with supplemental displays that could be toggled on and off. I don't really like smart glasses; they're bulky, cumbersome, and generally stupid-looking. But almost like AR, layered on top of reality, I could see certain statistics, or necessary things followed by actions, absorbing different things from multiple sensors to supplement my view.
To me, that's where innovation needs to take place. Quite honestly, macOS and Windows both have voice assistants, Amazon Echo as well, but I don't rely on these things much; they are not that usable at the moment, more of a toy. I think visual technology is not that usable either: AR glasses, holographic displays; we have a long way to go.
And ultimately the computer needs to get smarter about understanding our needs. Machine learning is a general step in the right direction, but I'm talking about being able to learn, adapt, and tie lots of things together to make decisions or recommendations without having to massage data, create data models, or choose certain algorithms.
It's a good thing: as a designer you don't have to prove your concept on its merits, i.e., make a Linux distro and see whether it ships. No, you convince and preach to the higher-ups, build up hype, and then drop your creation on all those lowly creatures like a 500-pound bomb, whether they needed it or not.
Win8: I remember how good it looked, and how little choice it left you but to use its disgusting Metro UI. If people do not use your UI when they have a choice, maybe it's bad; maybe you have thrown away years and years of learning and experience.