Hacker News
Rethinking Forms in the Age of Tablets and Amazon S3 (bricklin.com)
85 points by dax70 on Aug 19, 2015 | 20 comments



>> And, anyway, isn't the AT&T salesperson ignoring the need to create software that would be better than the paper, without which the tablet is just a paperweight or video viewer?

This is the biggest part. Management went out and bought iPads for our service techs, and of course they were harder to use than a desktop computer.

The problem wasn't with the iPad itself; it was the website they entered invoices on. A drop-down with 400 elements isn't a big deal when you have a 22" monitor and a mouse with a scroll wheel, but on a phone or tablet it's atrocious. One simple improvement was splitting it into 3 drop-downs, narrowing the item by manufacturer and then category, which reduced the final drop-down to 5-12 items. Another improvement was to sort them alphabetically instead of by part #.
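The cascading narrowing described above is just a filter chain. A minimal sketch (the part records, names, and field layout here are invented for illustration):

```python
# Sketch of the cascading drop-down logic: each selection narrows
# the candidate list for the next control. Part data is made up.
parts = [
    {"num": "B-1092", "mfr": "Acme", "cat": "Brakes", "name": "Brake pad set"},
    {"num": "B-2209", "mfr": "Acme", "cat": "Brakes", "name": "Rotor"},
    {"num": "F-0441", "mfr": "Acme", "cat": "Filters", "name": "Oil filter"},
    {"num": "F-0556", "mfr": "Bolt Co", "cat": "Filters", "name": "Air filter"},
]

def options(parts, mfr=None, cat=None):
    """Return the sorted choices for the next drop-down."""
    if mfr is None:
        return sorted({p["mfr"] for p in parts})
    subset = [p for p in parts if p["mfr"] == mfr]
    if cat is None:
        return sorted({p["cat"] for p in subset})
    # Final drop-down: sorted alphabetically by name, not by part #.
    return sorted(p["name"] for p in subset if p["cat"] == cat)

print(options(parts))                    # manufacturers
print(options(parts, "Acme"))            # categories within Acme
print(options(parts, "Acme", "Brakes"))  # a handful of items, not 400
```

Three taps through short lists beats one scroll through 400 entries, especially on a touchscreen.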

Next up were the basic mobile/responsive enhancements. One field per row, larger fonts...


For context, this is written by Dan Bricklin, the inventor of the spreadsheet: https://en.wikipedia.org/wiki/Dan_Bricklin


Forms are hard because what you're really doing is encoding business processes. Dan started to touch on that, then kind of went off on a tangent. Anybody who's worked on LOB software knows that most of the time no single person, managers included, is aware of the entire set of on-the-ground processes a business uses. Moving them to a computer is therefore very difficult. Plus, paper has very nice options for exception handling; computers are often really bad at things like letting the person entering data make notes in the margin and changing the processing of a case based on those notes...


In my experience, moving people to computers means reconciling management's understanding of how people do their jobs with how they actually do them. This is especially tricky when the designated outcomes are only reachable through processes other than the designated ones.


I've been thinking about the extreme of this - free-writing on paper. I do a great deal of "programming" on paper, sketching designs, making notes, asking myself questions, making lists. The free-formness of it is critical to fast, intuitive thinking.

Humans are naturally intuitive thinkers. Rigor must be trained. We see the struggle all the time, with unsophisticated computer users trapped in some nightmare of a form that they can't understand or use effectively. A lot of communication is lost that way, and a lot of time and mental health.

Coming from the other end, I have the rigor of an experienced programmer, but also the mind of an artist. Music, photography, and other pursuits keep me deeply in touch with my intuition. Being able to think rigorously about intuition is pretty useful! Many years ago, I took jazz theory lessons. My teacher emphasized what he called four-way learning. I was to read the chords on paper, say their names out loud, watch/feel the shapes my fingers made on the piano, and listen to the sound the chord made. These overlapping inputs, he said, were vital to memorizing such complex material. And committing it to memory was the key to accessing it intuitively, on the fly - to creating complex things (in this case, music).

To this day, it's why I prefer pen and paper as my first data capture for ideas. Even if I never look at that page again (and I usually don't), the physical sensation of writing it that way, the visualizations I can create, lend power to my ability to memorize and synthesize the ideas.

So what does this have to do with tablet forms? Well, in many cases, the data we need to capture isn't rigorous or easily structured. Imagine a situation where, say, a doctor is trying to get a description of pain a patient feels. Imagine the patient can draw that pain, rather than translate to words. Wouldn't that be useful? But that's not really accessible, either with the physical limits of paper forms or the rigorous content and inputs of online forms.

So yeah, I like this article. Made me think.


This is exactly why I bought a Surface 3: the stylus really helps me sketch, doodle, and think. I just wish we could come up with something better than a notepad app to harness that experience.


The elephant in the room is when Apple will finally realize that people would very much like to use tablets for creative work instead of merely mindless consumption and bring out an iPad with stylus support. Post-Jobs, this should not be a problem.


The author has a nice essay on why input ought to be better, given all the resources we have available now. He has some good points. Then he shows off his app, and it's full of fields to be filled in. It also supports scribbled notes, or "ink as a type", as "The Power of PenPoint" called it back in 1991. Bricklin is still thinking "spreadsheet", but with a fancier GUI.

We can do better than that today. Far better.

First, the business apparently needs to collect some basic info about a car - license number, odometer reading, make and model, and color. Their form has 12 blanks of routine data to be filled in. The employee is using a device with a camera. So point the camera at the VIN plate or door sticker and read it. Machine reading of text is easily good enough to do that today, especially if you take a few images while the camera is looking at the VIN plate. That gives you not just the VIN, but all the make and model info, plus, if you have dealer access to vehicle data, info on any recalls, maintenance history, and other important things a service writer needs to know.

Show the tablet's camera the license plate and dashboard with the odometer reading, and let it record that data too. If this is an existing customer, you already have their personal info. If not, take an image of their driver's license or credit card and get it from there. You've now filled out most of the form without a single keystroke or scribble.
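Camera-captured VINs can at least be sanity-checked on-device before any lookup: the 9th character of a 17-character VIN is a check digit computed from the other 16, so most OCR misreads are rejectable locally. A minimal sketch of that standard check:

```python
# Check-digit validation for 17-character VINs (ISO 3779 / FMVSS 115).
# Useful for rejecting OCR misreads before querying any vehicle database.
TRANSLIT = dict(zip("ABCDEFGHJKLMNPRSTUVWXYZ",
                    [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 7, 9,
                     2, 3, 4, 5, 6, 7, 8, 9]))
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_value(ch):
    return int(ch) if ch.isdigit() else TRANSLIT[ch]

def vin_check_ok(vin):
    # I, O and Q never appear in a valid VIN (too easy to misread).
    if len(vin) != 17 or any(c in "IOQ" for c in vin):
        return False
    total = sum(vin_value(c) * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected

print(vin_check_ok("1M8GDM9AXKP042788"))  # a commonly cited valid sample VIN
```

If the check fails, prompt for another camera pass instead of sending garbage downstream.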

The only human input required is the customer complaint. That could be taken as audio and video, then transcribed automatically, with stills of the key points displayed and marked up, with comments by the service writer. The transcribed text could even be compressed to the essentials with something like the summarizer that used to be in Microsoft Word, specialized for talking about cars.

Some of the vehicle mechanical info, such as "Idle speed (RPM)" and such should be collected from the test device the service technician plugs into the OBD port. Parts replaced can be collected by showing the bar-coded boxes the parts come in to a camera, or maybe the parts themselves if there's a good recognizer for them.
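The OBD side of this is standardized: engine RPM is SAE J1979 mode 01, PID 0x0C, returned as two data bytes A and B with RPM = (A*256 + B)/4. A sketch of decoding such a reply (the raw bytes shown are illustrative, not from a real session):

```python
# Decode an OBD-II mode 01 / PID 0x0C (engine RPM) response.
# A positive reply echoes 0x41, then the PID, then data bytes A and B;
# per SAE J1979, RPM = (A * 256 + B) / 4.
def decode_rpm(reply_hex):
    b = bytes.fromhex(reply_hex)
    if b[0] != 0x41 or b[1] != 0x0C:
        raise ValueError("not a PID 0x0C reply")
    return (b[2] * 256 + b[3]) / 4

print(decode_rpm("410C1AF8"))  # 1726.0 RPM for the example bytes 1A F8
```

The same pattern covers coolant temperature, vehicle speed, and the rest of the mode 01 PIDs, so "fill in idle speed" becomes a read, not a keystroke.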

Auto repair is a good field in which to deploy such technology. Cars have known structure and lots of stored info. Doing this for doctors and cops will be harder, but will have a big payoff.


I'll note that the author is not unaware of these ideas; he does write:

The tablet has the ability to use a camera and microphone for [..] reading barcodes and other visual recognition tasks that speed input and cut down on errors

but the kind of environment he's addressing isn't "a company developing a specialised app", it's "a generic form editor that someone currently working for a company creating forms as PDFs could work with":

These Tablet-Optimized Forms must be able to be built relatively inexpensively and quickly, hopefully by people involved in the part of the organization that uses them. Unlike traditional marketing-oriented B2C mobile apps, $50,000-$1,500,000 development costs, and long development cycles, are out of the question.

Image recognition and OCR of a VIN plate might be better, but it's not the same kind of office skill at all.


That means turning a form into a crappy app, because it's easy. That's exactly what he's arguing against, then doing.

Bar-code reading and OCR software is available for phones.[1] There are apps which read a VIN plate by bar code and return car info. (The one for iOS is reported to lock up, but the one for Android seems to be OK.) The task is to make it simple for low-level developers to integrate that into larger systems.

[1] http://www.abbyy.com/mobile-ocr/solutions


I know this is just a detail of the example app that isn't relevant to the concept itself, so I'm sorry if this is nitpicky, but it's a pet peeve of mine and it just annoys me to no end:

In the "inspection" part of the demonstration, the user can press the "pass" or "fail" toggle or swipe the labels to mark them as passed or failed. So far so good. But the only way the status of an item is indicated when it isn't active is the text colour of the label.

It took me a second or two to realize they were coloured to reflect the status and another few seconds to figure out which one was which and what was coloured or not.

Red/green colour deficiencies affect a significant percentage of the XY-male population (thanks to having only one X chromosome). Partial green deficiency alone affects a whopping 6%. In other words, if you pick 20 men at random, you should expect at least one of them to be affected. The problem becomes even more pronounced if your audience contains a disproportionate number of men (as automotive inspectors might be, if the gender gap holds true).

It's an incredibly bad idea to rely on subtle colour clues alone to convey information. Especially with the thin font rendering in iOS and OS X, text colour is probably the worst choice (larger continuous areas of solid colour can be easier to distinguish, but still present problems to individuals with more severe forms of colour deficiency).

I'm green deficient myself, so I'm personally affected by this problem. Without the larger toggle as a reference I wouldn't be able to determine which text colour is which (i.e. I'd have to rely on luminosity alone and likely assume the darker of the two colours is red).

If you want to design an interface like this, either back the colour clue up with another visual indicator like an icon (checkmark vs cross tends to be easily distinguishable even if the meaning isn't necessarily intuitive in all cultures) or make sure the colour is applied to a large surface. Think of it like text size: the bigger the text, the easier the individual letters are to make out even to people suffering from vision deficiencies. Obvious caveat: it won't help the (colour) blind, of course. You can't improve on zero by multiplying it with a very large number.
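Since luminosity is often all a red/green-deficient viewer has to go on, one cheap automated check (a complement to the icon/surface-area advice above, not a substitute) is comparing WCAG relative luminance. A sketch using the standard sRGB formula:

```python
# WCAG 2.x relative luminance and contrast ratio for 8-bit sRGB colours.
# Two status colours with a contrast ratio near 1:1 are nearly
# indistinguishable by brightness alone.
def _linear(c8):
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2):
    hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast((0, 0, 0), (255, 255, 255)), 2))
```

If your "pass" and "fail" label colours come out near 1:1 here, a colour-deficient user is being asked to distinguish them on hue alone, which is exactly the failure mode described above.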


This is all a bit reminiscent of Double Helix (https://en.wikipedia.org/wiki/Helix_(database)).

The best part of using Helix was that the people doing data entry, and the people who worked closely with them, really could and did update the forms themselves.

I like paper for a lot of things, but in software, I've always hoped we can arrive at systems where users get more of a hand in shaping the tools they use. We're not there yet.


There is a quite nice demo down the page: https://www.youtube.com/watch?v=QUpW5LXaoVU


The issue I have with that demo is that the user has to shuffle repeatedly between the keyboard at the bottom and the input at the top. Why not stick the input to the top of the keyboard?


Agreed.

But the most impressive part seems to be not the app itself, but the system used to build it.


Interesting article, but the references to and emphasis on S3 are lost on me, can someone enlighten me?


It did seem a bit odd but I think the main idea is that storing big, non-text values has traditionally been seen as hard (expensive, unreliable, prone to security hazards, etc.), not without cause given how bad most enterprise IT departments are at it. Now that it's available as a simple, highly-reliable pay-for-usage service, app designers need to reconsider that bias.


20 years ago, a durable object storage solution like S3 meant buying an EMC Centera for über dollars and maintaining multiple data centers, etc.

Now, you need a major credit card.


Using custom input controls for different data types isn't exactly a new idea.


The bad thing is it goes against the learned/standard input systems for whatever tablet/phone you are on.



