Inkbase: Programmable Ink (inkandswitch.com)
675 points by infinite8s on Nov 30, 2022 | 83 comments



This is very cool. I was actually trying to do something like this with a side project over the last few years that lets you sketch out wireframes and add in logic and various custom components.

For me there is something really powerful in drawing freely with a pencil and not being constrained by the way tools (Figma, etc) make you think.

Anyhow, the side project is here if anyone wants a quick way to do wireframes and add in logic, components, etc - https://roughups.com

Specific lesson on using data here https://roughups.com/learn/boxes


This is really impressive. There's no video on how the "Logic" editor works. Have you thought about linking something like QuickJS and allowing actual code?

If one could write generic code, and if the properties of the inked objects were exposed to the runtime (i.e. corner points, stroke/fill colors, rotation, etc.) your app would be really close in functionality to the linked project.


Ah yeah, I seem to have not uploaded the logic video there; I'll track it down. It's very much a first iteration at the moment, but that's a great shout on using existing libraries. I also had ambitions to connect to data sources/APIs etc, so hopefully that's coming soon.

My ultimate aim is to get it so that you can actually publish a sketched app to the App Store… whether Apple would accept a scribbled app or not, I don't know!


Oh, I love this.

Since you have a whole section on learning:

One thing I've noticed is that, at least on physical whiteboards, most people I've worked with weren't comfortable with their handwriting and drawing skills. Even when everyone agrees that "rougher is better."

For them I used to run a quick hands-on workshop back at Pivotal Labs. It was called "Whiteboard Hacking" (please don't judge ;), for which I also published a small free manual.

The manual with a bunch of exercises is still here for anyone who feels overly critical of their handwriting and drawing skills: https://publish.obsidian.md/alexisrondeau/Attachments/Whiteb... .


> Most of our examples were built entirely on the iPad, using Inkbase’s interface. Sketchy math was not. Much of the code that runs it ... was written on a laptop.

> Building larger, more technical software systems in Inkbase becomes extremely difficult for many reasons, from the poor ergonomics of typing with an on-screen keyboard

Nobody has really solved the ergonomics problem of being able to type on a keyboard and also sketch, and have the entire system be portable and friction free.


Decent handwriting recognition would help here - not as a full substitute, but as a lightweight alternative.

Though I cannot imagine it working within the confines of the keyboard HID API.

With non-defective keyboards (and non-spazzy hands), the key presses on a keyboard are non-probabilistic; that's not the case with handwriting, where a large amount of the information regarding how to interpret text comes from the surrounding characters - including those that follow.

You'd need to deal with the shifting probabilities of text input without introducing user-noticeable latency or triggering an excess of events.

It doesn't sound impossible, but it wouldn't be easy either.
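To make the "shifting probabilities" point concrete, here's a minimal sketch in TypeScript (all names and types are hypothetical, not any real recognizer's or OS's API) of a decoder whose output contract allows retracting characters it already emitted - something a plain key-press event stream cannot express:

    // Hypothetical sketch: a decoder that keeps the best current hypothesis
    // and may retract characters it already emitted.
    type Hypothesis = { text: string; prob: number };

    type EditEvent =
      | { kind: "insert"; text: string }
      | { kind: "replace"; from: number; to: number; text: string };

    class IncrementalRecognizer {
      private committed = "";

      // Called with the ranked hypotheses after each new stroke.
      update(hypotheses: Hypothesis[]): EditEvent[] {
        const best = [...hypotheses].sort((a, b) => b.prob - a.prob)[0];
        if (!best) return [];

        // Longest prefix that still agrees with what we already emitted.
        let i = 0;
        while (i < this.committed.length && i < best.text.length &&
               this.committed[i] === best.text[i]) i++;

        const events: EditEvent[] = [];
        if (i < this.committed.length) {
          // Later strokes changed our mind about earlier characters:
          // something the keyboard HID model has no way to express.
          events.push({ kind: "replace", from: i, to: this.committed.length,
                        text: best.text.slice(i) });
        } else if (best.text.length > i) {
          events.push({ kind: "insert", text: best.text.slice(i) });
        }
        this.committed = best.text;
        return events;
      }
    }

Whatever consumes this has to accept replace events, not just appended keycodes, which is exactly the mismatch with the HID API mentioned above.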


Even iPad handwriting recognition has weak points. For example, a q by itself will almost always turn into a 9. I wonder if PDAs had it right after all and we need to adapt our characters to make them more easily understood by machines.


My handwriting, because of a life of typing (this year is my 40th year of 10 finger touch typing; I was 8 when I got my certificate, so I basically skipped writing), is horrible and only I can read it. The iPad cannot, at all.


Not even I can reliably read my handwriting.


You remind me of a Windows CE PDA I had in the late 90s through early 00s [0] that had a pen stylus and consistently amazed me by interpreting my scrawly scribbles flawlessly - I'm hard to please, but it left me in awe.

My experience with every handwriting recognition facility since has been that it has got worse and more opinionated, and is probably tied too tightly to grammar and spell checking - rather like what I call "destructive texting", when predictive typing on mobile devices constantly auto-replaces words after I've typed them, without me noticing until the message is sent!

[0] I still have it, although it hasn't been powered up in a couple of years!


Out of context, a q and a 9 might be hard to distinguish; but all you need (in context) to tell them apart is where they appear vertically in the line that they're on.
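As a toy illustration (a hypothetical heuristic, not how any shipping recognizer actually works), the check could be as simple as whether the glyph's tail drops below the baseline of its line:

    // Screen coordinates: y grows downward, so a descender means the glyph's
    // bottom edge sits below the baseline. A 'q' descends; a '9' normally doesn't.
    function disambiguateQOrNine(glyphBottom: number, baseline: number,
                                 xHeight: number): "q" | "9" {
      const tolerance = 0.15 * xHeight; // allow a little baseline wobble
      return glyphBottom > baseline + tolerance ? "q" : "9";
    }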


How about speech to text instead?

Whiteboard use in real life is a combination of speech and sketching.


An obvious if orthogonal solution might be to substitute typing with voice input, designed in a way that avoids the keyboard non-optionally sliding up to cover half the screen whenever text input is selected. Recognition accuracy is pretty high these days, for me at least, and it's only getting better each year. And transcription rate is equal to or better than typing speed.


> Nobody has really solved the ergonomics problem of being able to type on a keyboard and also sketch, and have the entire system be portable and friction free.

Writing in Obsidian and sketching in Excalidraw (both with mouse and stylus) and inserting the resulting image via MD is a nice workflow for me. I like it.


+1 for that workflow. This way the writing part is done in markdown, and the diagram part in Excalidraw (best tool for the job and all). If you open in split view (screen real estate permitting) you can even do both at the same time with some light context switching.


Bluetooth keyboards are a thing, right?

The ones that fit as part of a case seem ideal.


Wouldn't sketchpads (ones for artists) solve the ergonomics problem?

Might not be as portable, but certainly more comfortable.


I spent two years preparing notes and delivering lectures 4-6x a week using a sketchpad.

1. If I put the pad to the right of the keyboard, then I can type normally, but I am writing at an awkward angle. If I put the pad right in front of me, then the keyboard is too far away, and I can't type fluidly. There is this continuous tension about where to place them.

2. Pads use Bluetooth or a wire to connect. If I use Bluetooth, then my headphones must be wired. So there is a tradeoff there. Headphones are necessary for meetings.

3. There are the non-screen sketchpads, and the screen ones. The non-screen ones require a lot of hand-eye coordination (because you have to look at your monitor while drawing on the pad). They are not as stress-free as paper. The screen ones would help a lot more, but now you have two screens showing the same thing. The pad screen to draw things and move them around with your hand/pen, and the monitor to show you what you type. Kinda stupid.

Well, why not just use an iPad-like tablet and both draw on it and type with a keyboard? Because drawing requires the surface to be flat, and keyboards require the screen to be verticalish so you can easily see what you are typing.
4. The software is still bad. There is still no native cross-platform sketchpad application with collaboration features built in, that will let me conduct a three hour online research meeting with dozens of pages of math. Browsers will inevitably crash, and are just slow.


> 4. The software is still bad. There is still no native cross-platform sketchpad application with collaboration features built in, that will let me conduct a three hour online research meeting with dozens of pages of math. Browsers will inevitably crash, and are just slow.

Any experience with https://drawpile.net/?


Good that the online collaboration is a feature.

But this needs some notion of pages or slides, so we can progress through the calculations and later export to PDF.


> 2. Pads use bluetooth or wire to connect. If I use bluetooth, then my headphones must be wired. So there is a tradeoff there. Headphone are necessary for meetings.

Recently entered the iOS/iPadOS world and still having a lot of trouble adapting, but are you suggesting that the iPad can only connect to one Bluetooth accessory at a time? Or that the Bluetooth connection becomes unstable when too many devices are connected?


If you mean graphics tablets like the Wacom displays with pens, no. It works fine with parametric graphics or CAD, for example, so long as I stick to simple dimensions, with one hand on the numpad while the other holds the pen. But the second I need full words - like variable names in complex equations, renaming groups or layers, fuzzy searching for a brush or color by name, etc. - putting the pen back down (without losing it!) becomes too cumbersome. It either interrupts the creative process or gets in my way when I know what I want and just want to do it as quickly as possible.

It gets worse: since touch screens are actually pretty imprecise, I end up using a 3D mouse for viewport manipulation and a regular mouse for precise navigation (nested menus!) on top of the pen and keyboard. At least one of the input devices is considered lost at any given time.


One-handed chording keyboard in the non-dominant hand?


Reading the article, it seems to be more about programming your sketches to react as you draw than about using sketches for programming. I think both directions are interesting, but the latter actually catches my imagination more.

Visual scripting could be a good start. Unreal's Blueprints are really good with their UI and provide plenty of usability. Perhaps if you could draw your own nodes with the properties you want, this could create the missing link for a truly hand-drawn visual programming language.


For one exploration of that direction, please see our paper Crosscut: https://www.inkandswitch.com/crosscut/



On the other hand, can one just scribble code by hand and use OCR to get it right?


Programming languages require a lot of precision (one missing symbol, improperly capitalized letter, etc will break the entire thing), whereas even the best OCR is very imprecise. That sounds like a really bad combination.

On the other hand, maybe a specialized programming language optimized for that could work. The blueprints idea seems like a good concept to start with.


It would be nice to be able to basically write an equation and have it evaluate. Even a pretty complicated one…

A full programming language would be interesting but pretty alien. As someone who for whatever reason tends to end up with lots of super/sub/subsubscripts (sometimes with multiple dimensions in each!) - variables with, like, more than 6 letters are basically a nightmare when writing by hand. I can't imagine writing code by hand, at least with typical variable names. Although maybe a programming language that looked more like prose would be possible.


>It would be nice to be able to basically write an equation and have it evaluate. Even a pretty complicated one…

I heard Mathpix is basically that: math via OCR to solution (haven't used it personally).


That was definitely in my mind. There are dyslexic fonts to help people not confuse letters. We want a programming language whose structure is fairly immune to small mistakes.

This is helped by the fact that Inkbase has a domain-specific language, which does not have to be generally expressive and in which typically only short snippets are written.


If you prefer watching a talk, I can't recommend their Strange Loop 2022 talk enough: https://www.youtube.com/watch?v=ifYuvgXZ108


Odd thing: opening that video in the YouTube app on my iPhone makes the phone scorching hot below the camera in the back, and then crashes YouTube. Other videos from the blog post - YouTube or not - work just fine.


The video is encoded in VP9; is your iOS up to date? (Since the hardware supports it.)


Obligatory warning: Strange Loop 2023 will be the last. If you haven’t been, it’s a good excuse to go.


This is so incredibly cool. I love everything about what Ink and Switch is doing here - the what, the why, and the how. Even the presentation -- the website post per se -- is just gorgeous. It'll take me time to absorb everything in it, but it's already become the first thing in years to make me think I might want an iPad (vs my trusty reMarkable2). Amazing work, bravo, thank you for sharing. This is precisely the kind of thing that helps remind me that excellence and craftsmanship and the high-minded diligent pursuit of worthwhile innovation are still to be found and celebrated.


The prior art section should be a reminder that we haven’t progressed as far as some might think. I’ll watch Doug Engelbart’s demo every year or so to remind myself of that.


Modern operating systems are missing a monumental opportunity for turning computers into transformative educational tools. Not only have we not progressed since the visions of Engelbart and Alan Kay, we've regressed in many ways. Mobile OSs are walled gardens mainly suited for content consumption and mining user data to sell to advertisers. This is now even spreading to desktop OSs, where most of the time it feels like the computer is using you, rather than the other way around.

Despite how much computers have revolutionized every aspect of our lives, they're awfully primitive at being educational. Think about it: our best online knowledge resource is the equivalent of a classic encyclopedia, with static text and some multimedia features. There are plenty of free lectures available, but they're limited to video recordings and text. Content is rarely interactive, or presented in a way that guides learning via experimentation. There are some worthwhile resources that do this right, but they're either not very accessible or not mainstream.

Imagine where we could be as a civilization if operating systems were built with education in mind. I'm not smart enough to design such systems, but having seen what some of the brightest minds in our industry have come up with decades ago, I can't help but feel underwhelmed, to say the least, by what we have today.


I've worked in education (as a teacher), and in edtech, trying to build such systems. I used to share some of your views and frustrations.

Your post brings 2 thoughts to my mind:

1) I disagree with your characterization of "our best online knowledge resource". I assume you're talking about Wikipedia; sure, that's one way to characterize it (albeit not the most genuine, IMO). But Wikipedia is a tiny part of the picture; we also have YouTube (with millions of creators from all cultures/languages/fields), Khan Academy, the Internet Archive, your local library, and countless other resources. That's the beauty of the internet: it's decentralized. No centralized service can fulfill every single knowledge need, because knowledge gets formed when an individual has to integrate multiple sources of information and reconcile them with their experience of the world (sorry, fans of the Young Lady's Illustrated Primer; it was always a literary illusion). Otherwise it's not much more than just regurgitating propaganda.

2) Ultimately, software doesn't really matter for education. What matters is that kids have a physically safe (heated/cooled, with access to food, clean restrooms, etc) facility, and are under the guidance of teachers who are in an environment conducive to quality teaching (eg classes of reasonable sizes, administrators that don't micromanage them, etc). I've seen this "we just need the right interactive software to fix education!" fallacy so many times amongst tech people. Sure, in the hands of competent teachers who have the above, tech & software can enhance learning. But it's not the fundamental piece. And many, many schools (in the US and in many "developed" countries) lack those fundamentals. If you want to "imagine where we could be as a civilization", that's where you have to improve things.

Sorry, I know "it's not about the tech, it's about the people and basic physical environment" is a terribly boring answer for a hacker. But spend some time teaching, or building fancy software that ultimately doesn't do much for student outcomes, and I suspect you will observe the same.


I appreciate your reply, as you clearly have more experience than me with this topic, but bear with me if I push back on some of your points.

> Wikipedia is a tiny part of the picture; we also have YouTube (with millions of creators from all cultures/languages/fields), Khan Academy, the Internet Archive, your local library, and countless other resources

That's precisely my point. YouTube is audiovisual, something we've had since the dawn of television. Sure, the content has exploded in diversity (which has its own problems of curation), but it's still a fundamentally consumption-based, one-way, zero-interaction learning experience.

Khan Academy and other MOOCs are also audiovisual and text based, and at best, have communication capabilities with the instructor and other students. This is attempting to bring the traditional school experience to the virtual world, and is hardly revolutionary.

The Internet Archive, while vast in content, is still only audiovisual, and has the same consumption and curation problems.

If we settle for computers being a mere portal to traditional resources, then, yes, there is vast knowledge to be acquired. And this has been invaluable for many people, myself included.

But computers are capable of so much more if we step outside the boundaries of traditional teaching. They allow content to be interactive, linked, remixed and experimented with in ways that weren't possible before. For the developing mind of a child, this allows entirely new ways of making knowledge accessible and entertaining. Sure, there have been many attempts at this, and the edtech sector is huge, but none of this is fundamental to how computers are used. At the end of the day, most children will be more drawn to endless media consumption than an educational app, and our OSs are optimized for the former rather than the latter.

It's difficult to imagine what such a system could look like today, or the repercussions it could have on education, but I find Alan Kay's ideas very visionary in this sense.

> Ultimately, software doesn't really matter for education. What matters is that kids have a physically safe (heated/cooled, with access to food, clean restrooms, etc) facility, and are under the guidance of teachers who are in an environment conducive to quality teaching

Sure, this is the traditional school environment. While that certainly helps, and it's a crime that teachers are so underappreciated and underpaid, that system is broken in many ways: uninspired teachers who fail to raise interest in learning, teaching practices that encourage memorization of concepts rather than curiosity and experimentation, the idea that everyone learns the same way or at the same pace, corruption, bullying, and the list goes on and on.

We have the technology to revolutionize every aspect of our lives, yet when it comes to education, we're still relying on traditional methods. So I disagree that software doesn't really matter. All the software _attempts_ we've had so far haven't made a breakthrough, but the potential is there.

As examples of the good kind of software, take a look at the articles by Bartosz Ciechanowski[1]. Or in a more commercial sense, brilliant.org. These are isolated examples of what computers can do, but imagine if the same operating system you're reading this on fundamentally worked in a different way, to allow free-form experimentation with concepts in ways we've never seen before. Then imagine that system connected to the modern internet, where billions of people are doing the same thing, and what that interaction could mean for developing new ideas. This goes beyond simple screen and document sharing that we find sophisticated today. And, in many ways, it's unimaginable precisely because it's so far from what we can do today.

[1]: https://ciechanow.ski/


This is all very fertile territory, it's always good to engage!

I get your point about YouTube videos or Wikipedia pages being "more of the same", in a way. There's a kernel of truth to it, but I don't think that's entirely fair - just the fact that Wikipedia has hyperlinks makes it already immensely more valuable as a tool - and of a fundamentally different medium - than a traditional encyclopedia. Same for animated/dynamic/interactive graphics that are trivial to embed in a webpage but impossible to represent on paper or on a TV screen.

But point taken, it feels like computers somehow ~should~ enable something more radical. When pushing for the Macintosh in education in the 80s, Steve Jobs used to talk about how computers might one day enable students to ask questions directly to Socrates about what he wrote and have them answered, rather than not being able to engage with a text beyond reading it. I think that's what you're getting at.

Now what is really interesting about this is that it gets to the core of what we (think we) want from teaching - having a perfectly patient interlocutor to whom we can ask questions, who can clarify misunderstandings, guide our attention, etc. Maybe this is something that we'll be able to build given recent breakthroughs around language nets. I don't think we're anywhere near but it's an interesting lead for sure. A "Socrates chat bot" that could meaningfully answer questions and clarify confusion about what he meant would be very impressive.

Bartosz' work is utterly fantastic, and is part of a broader movement termed by some as "explorables" (https://explorabl.es). Explorables also (unsurprisingly) have their roots in the early days of computer science, where a handful of computer scientists and educators saw the formidable synergy between constructivist approaches to pedagogy, and software ("Mindstorms" by Papert is a seminal text here). There's plenty of cool work in that field from the last half century (which I contributed to in my own minute way when I was in grad school). That general idea - give students computer models to manipulate so they can intuitively develop a mental model for things! - has lots of work behind it.

But does that work effectively scale out, and translate to better student outcomes at a societal level? I don't want to say it's a big resounding no, but... it's not encouraging. We're certainly way past the optimism of the early 2000s OLPC when we thought that all we had to do was give students a laptop loaded with educational software to "fix" education.

To follow one of your points, the explorables website linked above has hundreds of them listed - if they could just be handed off to students and suddenly dramatically improve outcomes, teachers would certainly be doing that.

So we're back to our original question - if this is all stuff we've been doing and exploring for half a century, why isn't it more widespread? Why hasn't it meaningfully improved our issues? [0] Is it because we haven't done enough of it, because we're missing some key insights? Or is it because maybe it doesn't solve the problem in as fundamental a way as we would hope?

And that's where I tend to fall more into the latter camp - education is fundamentally a social process, learning environment matters a lot, students are not always going to be receptive and the role of the teacher is also knowing how to handle that. An adult can tell you "stop screwing around" in a way that computers (or an ideal Socrates chatbot) can't - that's also "education".

A 3rd grade teacher's biggest challenges lie more with keeping their (often oversized) class focused, teaching all the points they need to get to, trying to have a meaningful impact - any impact - on the students who come to school hungry or improperly clothed or fundamentally opposed to learning anything [1] because of a crappy family situation - than a need for more interactive materials.

Here's a nice short video I recently saw of a great math teacher in action; I encourage you to watch it.

https://www.youcubed.org/resources/summer-math-camp-the-dot-...

The value of the teaching here is not so much the content, which could easily be summarized in a sentence and a few pictures. The pedagogical value comes entirely from the teacher, and how she manages the class, gives everyone a voice, reinterprets their answers in the context of the original question and what she wants to demonstrate, etc.

My question to the reader: do you think this little math exercise for middle schoolers would be as effective as a webpage, no matter how interactive? Or does its value come from the fact that it is a social, embodied, cooperative process?

[0]: One thing that has been repeatedly demonstrated to raise student achievement: giving out free lunches. https://www.maxwell.syr.edu/docs/default-source/research/cpr...

[1]: Here's a fun one I've heard from French teachers: male muslim kids who openly defy female teachers because they were taught non muslim women are not supposed to be sources of authority.


I don't see Engelbart's work - and all other similar pioneering work that routinely gets brought up anytime HCI folks get together to wax poetic about what could be - as a reminder of how much more progress lies ahead of us.

In fact, most of the things shown in those demos can be done today - perhaps in more narrow ways, but fundamentally we have collaborative document editing, video chat, complex live drawing tools, multi stream video editing, handwriting recognition, etc etc etc. All operating at scales that could only be dreamt of in these early days.

What it brings to my mind though is that software doesn’t exist in a vacuum. Writing software requires many hours of human effort, maintaining it even more so. Sustained, focused, organized human effort requires funding of some sort [0]. Software exists to solve problems, and in our globalized capitalistic economies, it means the value of software does not lie in reaching some paragon of pure academic composability/extensibility, but in solving concrete problems for people while meeting some arbitrary costs/tradeoffs.

This is why those “tools for thoughts” demos seem to always rehash the same ideas and get stuck circling around the same drain that Engelbart & Kay & friends charted 50 years ago; in the meantime, some industrial company you’ve never heard of is paying a few consultants big bucks to come up with “boring” Excel spreadsheets that are just as much “tools for thought” as anything else that humans use.

Now, am I satisfied with this state of affairs, and would I love to see what models for writing and maintaining software could exist in a non capitalistic culture? Absolutely not, and absolutely.

But that seems to me to be more the root cause of why we’re still chasing the Engelbart mirage over half a century later, rather than some fundamental/conceptual “progress” to be made.

[0] and speaking of funding, it is interesting to look at what funding environments those open ended “tools for thought” projects tend to come from; more often than not academia, or in the case of ink and switch, an independently wealthy PI. Places directly connected to money making ventures, like Xerox PARC or MSR, are short lived, and few and far between.


> In fact, most of the things shown in those demos can be done today

That's not a coincidence, as modern OSs were inspired by those demos. But isn't it a sign of a lack of progress that the pinnacle of modern technology is that we can do the same things shown 50 years ago _slightly better_?

Why haven't there been equally revolutionizing ideas in HCI since then? We have better screens on smaller computers, perfected tapping on glass and haptic feedback, but what we can do with all this technology is awfully limiting.

Add to that the invasion of advertising into every facet of computing, which perverts companies' incentives to develop technology that benefits humanity rather than exploits it, and in many ways we've regressed.

XR seems to be the next step forward (itself not a novel idea either), but so far it seems that it will be ruled by the current tech giants, which is far from enticing.


> isn't it a sign of a lack of progress that the pinnacle of modern technology is that we can do the same things shown 50 years ago _slightly better_?

This was implied, but thank you for capturing the spirit succinctly.

I've got an old Mac IIci, picked up at a thrift store for $12 with an old version of Photoshop and an ethernet card, that I keep around to remind myself of how little we've progressed. For a machine manufactured in 1989 with performance measured in MHz, it may be noticeably slower, but that's not the point. It's that we're doing the same tired twentieth century stuff we were doing 33 years ago, just slightly faster.

XR does seem a path forward, but the so-called giants you mention are selling subsidized prototypes you can't even take outside without a stern warning that you may brick the device. It's been close to a decade since the DK1 came out. You'd think they'd be past the point where legs are a new feature.


A lot of what's kept us "stuck in the 20th century" lies in just how many human details need to be accommodated to fully computerize the workflow.

Through the early 1990s, hardly anyone was doing electronic file transfer regularly in their personal workflows. While there were many examples of phoning in remotely to be updated or to do certain kinds of work, it was a per-industry thing. The larger changes finally came to pass only as email and office networking gained widespread adoption. So... you didn't need computers everywhere in everyday life. They were a nice addition if you were writing frequently or you wanted a spreadsheet, but the net outcome of that was that you could run a smaller office with less secretarial staff - and not a lot more.

In the '90s and '00s, the scope expanded to cover more graphics and messaging workflows. But it was still largely a 1:1 replacement of existing workflows in industry, with an import/export step that went to paper. And when you have the "go to paper" bottleneck, you lose a lot of efficiencies. Paper remained a favored technology.

It really wasn't until we had smartphones and cloud infrastructure that we could rely on "everything goes through the computer" and thus start to realize Englebart's ideas with more clarity. And that's also where the "social media" era really got going. So it's like we've barely started, in fact.

All the prior eras in computing were a kind of statement of "it'll be cool when". The future was being sold in glimpses, but predominantly, the role of the computer was the one it had always had: to enhance bureaucratic functions. And the past decade has done a lot to challenge the paradigm of further enhancement towards bureaucratic legibility. In the way that urbanists joke about "just one more lane, bro" as the way to fix traffic, we can say "just one more spreadsheet, bro" has been the way we've attempted to satisfy more and more societal needs.

But there is a post-Engelbart context appearing now: instead of coding up discrete data models, we've started strapping machine learning to everything. It works marvelously and the cost of training is a fraction of the cost of custom development. And that changes the framing of what UI has to be, and thus how computers engage with learners, from a knobs-and-buttons paradigm to "whatever signals or symbols you can get a dataset for."


Well, many here seem to worship an OS that is stuck in 1970s text interfaces, using 2022 hardware to run 1970s-style text CLI applications.

Only Apple, Google, and Microsoft seem to care about pushing consumer OS experiences forward, and unfortunately they always do two steps forward, one backwards, every couple of years.


> I've got an old Mac IIci, picked up at a thrift store for $12 with an old version of Photoshop and an ethernet card, that I keep around to remind myself of how little we've progressed. For a machine manufactured in 1989 with performance measured in MHz, it may be noticeably slower, but that's not the point. It's that we're doing the same tired twentieth century stuff we were doing 33 years ago, just slightly faster.

On a computer the size and weight of a paper notebook, with all day battery life, a display whose color quality/resolution/refresh rate were utterly unimaginable 33 years ago, I can have dozens of layers that are up to tens of thousands of pixels in edge size, use advanced AI to generate textures like grass/clouds/etc. or segment arbitrary objects from the background, recomposite all that in real time, etc. etc. etc...

If you haven't used a computer in the last 33 years, I highly recommend doing so.

If your argument is that we're still making 2D pictures, well, we've been making those for a few thousand (if not tens of thousands of) years. If you want to make weird experimental 3D/4D/nD VR/AR/xR art stuff, there's lots of great tooling for that too (but it won't run on your 1989 Mac...)


> On a computer the size and weight of a paper notebook, with all day battery life, a display whose color quality/resolution/refresh rate were utterly unimaginable 33 years ago

Those were completely imaginable 30 years ago to anybody with a physics background. It was just a matter of time before transistors shrank down to the nm range (of course with the enormous amounts of engineering work that made it possible, but there was no physical reason it couldn't be done).


It’s not about compute, though I’d suggest reevaluating the load involved above and where that might be accomplished before comparing apples.


Not to squash your enthusiasm, but this has been done many times in the past and it's just never worked out. I think one reason is that, except for a few programmers, the average user just doesn't want smart sketching. Sketching is simple, but add all the meta tasks and suddenly it's no longer remotely simple.

Not quite the same, but there was even a commercial vector illustration app that tried to add a bunch of "smart" programmable features back in the early 90s. It failed: https://www.google.com/search?q=intellidraw+adlus

Of course that doesn't mean someone won't get it right and compelling eventually. Me, I'd guess it would take some serious ML and maybe voice/gesture recognition for it to really work for more than a few geeks.


This is absolutely amazing. I'm working on a small side project exploring programmability in a smaller design space, and this really takes those ideas to the next level – I'm definitely going to be spending days going over this with a fine-tooth comb.

Thanks for sharing!


I am also working on such a project. Any interesting resources you could point to that help with the design of such tools?


Wow. This is the thing I've been looking for for years (ever since I got my reMarkable): something that would allow me to augment my scribbles and bullet journals.

To be honest, this is something I'd be willing to pay a lot of money for...


This is fantastically cool. Similar to the e-ink cards that were posted here recently, a really neat application would be pre-programmed pen-and-paper board games that people distribute to each other, and you can then play with someone remotely; their edits to the board would show up on your device and vice-versa. Think hangman, battleships, etc. You could probably even do checkers/chess, by drawing an X over the piece you want to move & then drawing it at its new location, or something, though I don't know that this kind of interface would be better than a normal GUI. For the whimsy, perhaps.


Cool stuff. This demonstrates there's still a long way to go in building user interfaces.

Also, I can imagine something like this would be very valuable when applied to language learning, especially for languages with ideograms.


The Strange Loop 2022 talk was quite interesting to watch.



This is amazing and reminds me of Tydlig, a wonderful freeform graph-based calculator that I really love: http://tydligapp.com/. It's not as practical to use as I wish it was, but I still love it for what it is.

And since I noticed the authors are possibly reading...

This is simultaneously the most amazing thing I've seen all month... and I watched people send a rocket around the moon... amazing, and yet deeply, deeply frustrating to read.

This is the same deeply frustrating, irritated feeling I have when I'm searching for answers to a problem and read an academic paper that talks about some algorithmic innovation or software improvement that does one thing or another that might help me solve my problem, and the entire paper is about the process and the results, with a tiny summary of the changes, and not a damn line of source code... and it really grinds my gears. It feels like I've been jerked around, my time wasted...

But through all that negative emotional stew, these two apps... Inkbase and Crosscut... look positively magical. I wasn't kidding: it's the most amazing thing I've seen in the last month, and possibly all year... and then there's the idea that the authors appear to have no intention of turning either of them into actual products... that all this will (unlike Tydlig) be impossible to show other people face to face as a demonstration of how computers and computing can be different from spreadsheets, math, and computer code... Having read and now re-read both pages, I saw no clear reference to the future of either project, beyond highlighting which interesting aspects they will take away into future work on other projects...

At the end of the day I know that magical things like this often arise because the developers have complete freedom: the ability to design everything without any outside pressure, no product growth to worry about, no user feedback to answer to, no help document to write... but at the same time, every time I come across something like this, which appears so complete, so far progressed towards being a product, but is just put on the shelf... it fills me with weltschmerz.

So now I have a brand new copy of https://museapp.com/ ... some weltschmerz... and the rest of my days work ahead. :-/


These experiments are not the products we think users would adopt if they held them in their hands. They are more like the studies a painter or a sculptor might do as part of planning a more ambitious piece.

It's hard to predict where this work might eventually lead (that's the nature of research) but I will just say that we continue to explore the space and we have another piece we'll be sharing soon.


This is really fun work that takes me back to the very early Newton days at Apple. It is a little sad that in the past 30 years we haven’t made more conceptual progress on what pen-based tools should be. It’s like Ivan Sutherland opened a door but we can’t seem to walk through it.

It’s not a hardware problem — pretty much these same demos could have been done (we did some of them!) in 1990 with much more primitive devices. It’s more that we haven’t found the underlying model to build on to get beyond simple demos.


Ah, I love seeing new things in this space!

I wrote a much simpler demo back in 2018 using the Apple Pencil + OCR to compile Swift code: https://twitter.com/NathanFlurry/status/980501243377344512

It's essentially just a glorified Swift AST and in no way competes with the efficiency of a traditional keyboard, but 48h of hacking scratched enough of that itch I had to build a VPL.


This is neat!

However I still think they'll have to pry my pencil and paper out of my cold, dead hands (:

Maybe I'm biased, since I've been drawing a fair amount from a young age, but as I watch these videos, and with every other technology like it I've seen, I can't help but think: "Oh, that would start to annoy me pretty quickly."

Maybe someone who grew up with a technology like this, and had a Vim-like relationship with the system (so they knew with confidence how it would dynamically act), would be able to do some really impressive stuff, though.


We created a spatial query system where the user can ask for objects in a specific region, inside a specified path, or in a general direction (e.g. “to the right of…")

Neat!
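A rough sketch of what such spatial predicates might look like (the types and the center-point approximation are my own guesses, not Inkbase's actual query language):

    type Point = { x: number; y: number };
    type InkObject = { id: string; center: Point };
    type Rect = { x: number; y: number; w: number; h: number };

    // "Objects in a specific region": centers falling inside a rectangle.
    function inRegion(objects: InkObject[], r: Rect): InkObject[] {
      return objects.filter(o =>
        o.center.x >= r.x && o.center.x <= r.x + r.w &&
        o.center.y >= r.y && o.center.y <= r.y + r.h);
    }

    // "In a general direction", e.g. to the right of an anchor point,
    // limited to a vertical band so distant objects don't match.
    function toTheRightOf(objects: InkObject[], anchor: Point, band = 100): InkObject[] {
      return objects.filter(o =>
        o.center.x > anchor.x && Math.abs(o.center.y - anchor.y) <= band);
    }

The "inside a specified path" case would be the same idea with a point-in-polygon test against the closed stroke.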


Is this app available to try anywhere, as a PoC or otherwise?


Superb. Best top-ranked submission of this year, imo.


Can you please make a newsletter to sign up for? I would love to follow your progress. I have wanted an app like this for a long time!


Here's an RSS URL from a recent Tweet: inkandswitch.com/index.xml

https://twitter.com/inkandswitch/status/1592239625573400576


We may in the future.

The RSS feed is a good long-term bet, and in the past we have posted on Twitter at @inkandswitch.


Having a programmable sketch never occurred to me - it's a really fresh take.

As someone who loves working on paper to design the structure of a system, the conversion to boilerplate structure has always been laborious. Any way to move from a drawn UML-type diagram to a basic class structure would be an improvement.
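As a rough illustration of that step (the recognizer output type below is hypothetical and has nothing to do with Inkbase itself), once the boxes are recognized the lowering to code can be fairly mechanical:

    // Hypothetical output of a diagram recognizer for one drawn class box.
    type UmlBox = { name: string; fields: string[]; methods: string[]; extends?: string };

    // Lower a recognized box to a boilerplate class declaration.
    function toClassStub(box: UmlBox): string {
      const ext = box.extends ? ` extends ${box.extends}` : "";
      return [
        `class ${box.name}${ext} {`,
        ...box.fields.map(f => `  ${f};`),
        ...box.methods.map(m => `  ${m}() { /* TODO */ }`),
        `}`,
      ].join("\n");
    }

    // e.g. a box labeled "User" with "name: string" and "save" written inside it:
    console.log(toClassStub({ name: "User", fields: ["name: string"], methods: ["save"] }));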


Is there an actual prototype one can check out? It seems the only one is theirs. Or did I miss a link somewhere?


They mention towards the end that they haven't found a natural programming model for this sort of thing yet and are playing around with different ideas. My initial thought was that an APL-style language with OCR could be a fairly natural fit, maybe as part of a node-based thing like their Crosscut project.


Shocking, especially the use of Lisp. I understand this might be a strange request, but a simple example of a game demo (not just math) would help… lots of fun is key to a project. Spreadsheets, yes, but that is business. Need to mix in fun as well.

But no doubt I would try it to see how it goes. Great work!!!


I see the potential but I'm not sure if that Lisp-style language is the right fit here. I'd at least add rainbow brackets so you can match up the brackets more easily. Other than that this seems really cool. I can see it being good for more technically minded people.


Part of this reminds me of metamine, which featured both iterative and declarative programming at the same time. In effect it had a magical equals that kept the left side updated, and had rules that kept things sane.
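For anyone who hasn't seen metamine, the "magical equals" is essentially a reactive binding. A toy sketch of the idea in TypeScript (my own formulation, not metamine's actual syntax or semantics):

    // Toy reactive cell: dependents are recomputed whenever it changes.
    class Cell<T> {
      private listeners: Array<() => void> = [];
      constructor(private value: T) {}
      get(): T { return this.value; }
      set(v: T) { this.value = v; this.listeners.forEach(f => f()); }
      onChange(f: () => void) { this.listeners.push(f); }
    }

    // "y = x * 2" as a standing binding rather than a one-shot assignment.
    const x = new Cell(3);
    const y = new Cell(x.get() * 2);
    x.onChange(() => y.set(x.get() * 2));

    x.set(5);
    console.log(y.get()); // 10, without y ever being reassigned by hand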


This feels like a peek into a wonderful vista but I wonder how close we are to actually seeing it in full. The context to my doubts is that the (initially world-changing) spreadsheet UI has stagnated for ages.


This is cool. As a programmer, I think the single most important use case is hand-drawn spreadsheets for business people. Making the numbers in a table interconnected, cascading, and tweakable with "ink" is very magical.


I've wanted to be able to program on my ReMarkable since I pre-ordered the RM1 way back when. If I had the money this is exactly the kind of work I'd be doing. Impressive stuff, @inkandswitch!


Geometer's Sketchpad meets OneNote. I think there are some cool ideas there, but I'd guess there are a few more iterations before we get a killer app like spreadsheets.


Silly detail, but I love that the picture of a "real" pen uses a fountain pen (Platinum Preppy!) with dot-grid paper.


What is it? I suggest you put a two-line TL;DR summary at the top of the article. I spent around 3 minutes on the web page but could not fully comprehend it.


Rule Nr 1: never start reading a paper/article with no abstract/summary.


> What would be possible if hand-drawn sketches were programmable like spreadsheets?

It's at the top of the article. It's even bolded to catch your eye.




