
It goes both ways. I certainly hate unneeded complexity, but some things are just complex. Good examples are font rendering https://gankra.github.io/blah/text-hates-you/ and text input https://lord.io/text-editing-hates-you-too/ You think it's going to be easy. It's not. It just isn't.

Often software is too simple. I just recently got my first "smart lights" (my mistake), and I'm using Apple Home to control them. I put 3 bulbs in a 3-bulb standing lamp. Home has no way for me to designate those as a single lamp; to Home they are 3 lamps. The supposed solution is to use normal bulbs and a smart switch, but that precludes being able to set the colors of the bulbs. In other words, they thought it was simple, but it's not.

It gets worse. At first I thought this would be cool, but now I realize that if I have a house guest, they won't be able to turn on the lights if I left them off. Arguably, if the power is cut and comes back on, maybe the lights should default to full white? At least then, in an emergency, they just work. But again, designing lights that do that would be more complexity, not less.

I can even add another wrinkle. My Siri is set to Japanese. So even if I had a HomePod (which I don't), my English-speaking house guests would not be able to ask for the lights to be turned on. More complexity, because the world is complex.




> It goes both ways. I certainly hate unneeded complexity, but some things are just complex. Good examples are font rendering https://gankra.github.io/blah/text-hates-you/ and text input https://lord.io/text-editing-hates-you-too/ You think it's going to be easy. It's not. It just isn't.

This reminds me of a talk by Venkat Subramaniam, called "Don't Walk Away from Complexity, Run", here's a recording of it: https://youtu.be/4MEKu2TcEHM?t=343

In it he talks about inherent complexity and accidental complexity: the former is part of the domain you're working in, while the latter is introduced accidentally and isn't actually necessary to address the problem at hand.

Often people talk about managing and decreasing accidental complexity, but what if you could instead reduce the inherent complexity by simply doing less? For example, instead of trying to do advanced font rendering for almost every language and writing style known to man, why not limit yourself to ASCII, left-to-right typing, and monospaced bitmap fonts on certain devices? I'd argue that in many cases, such as embedded devices, that's all that's actually worth doing. Seeking out ways to not drown yourself in complexity when you have limited resources applies elsewhere as well.
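To make the "just ASCII and monospaced bitmap fonts" point concrete, here's a minimal sketch of that kind of renderer. The glyph bitmaps are invented for illustration (a real table would cover the 95 printable ASCII characters), but the shape of the code shows how little machinery such a device actually needs:

```python
# Sketch: an ASCII-only 8x8 bitmap "renderer", the kind of thing that stays
# tiny on an embedded device. Each glyph is 8 bytes, one byte per pixel row.
# The two glyphs below are hand-drawn examples, not taken from any real font.
FONT = {
    "H": [0x81, 0x81, 0x81, 0xFF, 0x81, 0x81, 0x81, 0x81],
    "I": [0x7E, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x7E],
}

def render(text):
    """Return the text as 8 rows of '#'/' ' characters, one 8x8 cell per glyph."""
    rows = []
    for y in range(8):
        line = ""
        for ch in text:
            byte = FONT[ch][y]
            # Most-significant bit is the leftmost pixel in the cell.
            line += "".join("#" if byte & (1 << (7 - x)) else " " for x in range(8))
        rows.append(line)
    return rows

for row in render("HI"):
    print(row)
```

There's no shaping, no kerning, no bidirectional text, no combining characters: the entire "renderer" is an indexed table lookup, which is exactly why this subset is so attractive when resources are scarce.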

As someone else pointed out, a light switch would suffice nicely, instead of trying to write software to handle every possible use case for smart lights. Alternatively, just expose an API for them and let people write whatever code they want themselves, instead of shipping a closed, non-extensible solution.

That said, I agree that it's often easy to misunderstand how difficult something is, partly because you don't really care about many of the edge cases, such as right-to-left typing or kanji rendering.


> For example, instead of trying to do advanced font rendering for almost every language and writing style known to man, why not limit yourself to ASCII, left-to-right typing, and monospaced bitmap fonts on certain devices?

That works wonderfully if you never deal with people's names and have no interest in making your device available in France or Germany (65 million and 80 million people), or Japan (100+ million). That is leaving a lot of money on the table.


> That works wonderfully if you never deal with people's names and have no interest in making your device available in France or Germany (65 million and 80 million people), or Japan (100+ million). That is leaving a lot of money on the table.

That is a valid point, but I'm thinking more along the lines of smart IoT devices and other embedded settings, and there you'll find that localizing your device is sometimes unprofitable and doesn't provide much value at all.

Consider why almost all popular programming languages use English keywords: English has largely become the lingua franca of the industry. On a similar note, I'd argue that making your smart thermostat output kanji or German text could actually be more problematic. Anyone who ends up with the device, doesn't speak those languages, and doesn't know how to change the language (provided you even can change it, rather than shipping regional SKUs) would find it useless, whereas someone from the aforementioned cultures is more likely to know at least some English.

To that end, I believe it's possible to make devices available in France, Germany, Japan, and elsewhere even if you don't localize them into the local languages. Of course, your market penetration might be lower than if you go the extra mile, but you should really consider whether it's worth it.

Furthermore, the situation with names isn't utterly hopeless: romanization of names is definitely possible, and limiting yourself to just ASCII actually makes your code simpler in many cases; various other systems have historically become confused by Unicode and broken in disappointing ways.
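As a rough sketch of what "romanization" can mean in the easy cases: for Latin-script names, a Unicode NFKD decomposition followed by an ASCII encode drops the accent marks. This is a simplification, not a real transliteration scheme; it does nothing useful for Japanese or Cyrillic names, for which you'd need an actual transliteration library:

```python
import unicodedata

def ascii_fold(name):
    """Crude ASCII-folding for Latin-script names.

    NFKD decomposition splits accented letters into a base letter plus
    combining marks, which the 'ignore' ASCII encode step then drops.
    Characters with no ASCII base (kanji, Cyrillic, ...) vanish entirely,
    which is exactly the limitation of this approach.
    """
    decomposed = unicodedata.normalize("NFKD", name)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(ascii_fold("Müller"))     # Muller
print(ascii_fold("Françoise"))  # Francoise
```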

Of course, I'm speaking from my subjective experience. Living in a country with a few million inhabitants, I always use software in English. When it's localized into my country's official language, I look for ways to change it back, because the localized terms sound weird and the translations are often badly made or just feel uncomfortable. This is especially true in software tools like GIMP or Inkscape: using a non-standard language means you won't be able to follow help forums or the documentation, because almost always not all of it will be translated, or the menu items and nested paths will sound so different that it's impossible to find what you're looking for. The same applies to hardware in my eyes.

That said, there's probably a world of difference between a thermostat and a smartphone.


> For example, instead of trying to do advanced font rendering for almost every language and writing style known to man, why not limit yourself to ASCII, left-to-right typing, and monospaced bitmap fonts on certain devices? I'd argue that in many cases, such as embedded devices, that's all that's actually worth doing

I'm a firmware dev, so I'll tell you how this goes. Over the next 5-15 years, project requirements will change, feature creep will set in, and you'll eventually end up with a full-blown font renderer. Except it will be utter garbage, impossible to maintain, and it will infest the entire codebase, making it incredibly difficult to replace.

You shouldn't go for the most general solution; rather, you should select the solution that fits the current and near-term requirements. You had best know what the general solution is, though, otherwise you're doomed to reinvent the wheel poorly.


Why should our advanced computers not be capable of handling the most basic forms of communication with every person on Earth? Should we also drop accessibility?

It is an easy trap to fall into. Sometimes more is actually more.


> Why should our advanced computers not be capable of handling the most basics of communication with every person on Earth?

I'd argue that ASCII plus English is probably one of the most basic forms of communication you can implement, given how widely supported the character set is and how many people know English; no other pairing will get your device as close to being usable by a global audience. Of course, that doesn't answer your question, so let me try again.

Because there are devices out there that simply don't need to do this. Not every computer out there is an advanced one. Not every computer out there will interface with every person on Earth. A lot of complexity would be introduced for no gain in many situations, so it makes sense to choose the simplest option.

For contrast, consider GNU Unifont, which attempts to cover the entire Unicode Basic Multilingual Plane; the font weighs about 12 megabytes, which is more memory than some devices have in total: http://unifoundry.com/unifont/index.html
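A quick back-of-envelope calculation shows why the gap is so large. Unifont's raw bitmap format stores a 16x16 glyph in 32 bytes and an 8x16 glyph in 16 bytes; the glyph counts below are round-number assumptions, not exact Unicode statistics, but they illustrate the scale:

```python
# Per-glyph sizes follow Unifont's raw bitmap format:
#   narrow glyph: 8x16 pixels  = 16 bytes
#   wide glyph:  16x16 pixels  = 32 bytes
ascii_glyphs = 95 * 16       # printable ASCII, all narrow glyphs
bmp_glyphs = 65_536 * 32     # whole BMP, pessimistically all wide glyphs

print(f"ASCII table: {ascii_glyphs} bytes")                  # under 2 KB
print(f"Full BMP table: {bmp_glyphs / 1_048_576:.1f} MiB")   # raw bitmaps alone
```

Even before shaping logic, lookup tables, or font-file overhead (the 12 MB figure includes those), the raw bitmap data for the BMP is over a thousand times the size of an ASCII-only table, and already exceeds the total RAM of many microcontrollers.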

From a different angle: it would be nice if all devices could do this, but there are so many languages, writing systems, and characters that supporting all of them is too troublesome, since the underlying programming languages, their ecosystems, and their libraries don't support them out of the box either.

Not supporting those at the lowest levels of abstraction means you end up trying to tack the support on later, like what moment.js does for JavaScript's date and time functionality. And that's just a web application example; these problems are probably most pronounced in embedded devices.

> Should we also drop accessibility?

This does feel like a bit of a strawman, but I'd point out that many radios, thermostats, and a variety of embedded/IoT devices don't get much accessibility consideration in the first place. I can't recall ever being able to control any of those devices with my voice (apart from integrations with smart home systems, though those are currently a rarity; maybe things will improve in 20 years).

I actually recall a radio that attempted to support multiple languages on a segmented LED display (the kind that typically shows numbers by lighting up segments, much like this: http://www.picmicrolab.com/wp-content/uploads/2017/06/Alphan... ). Let me tell you, their attempt at supporting Russian wasn't legible, and as a consequence I simply couldn't figure out how to switch back to English, which was at least a little more readable in comparison. I fear the day someone attempts to encode kanji in a similarly limited environment, with results that actually make the device less accessible.

I might be horribly misguided, but that's my answer to you: we shouldn't attempt things that aren't feasible with our current technologies, since language support is currently rotten to the core in many of them. Given that this support isn't available out of the box, you'd be pretty hard pressed to support multiple languages in your little Arduino project with an LED display, especially because attempting to do so would keep you from actually making it do what you want.

Maybe some day I'll just be able to put this in my codebase:

  printOnScreen(translate("Device not connected!", getCurrentLanguage()))

And have simple translations generated at compile time for all of the languages the little project could feasibly support (with localization files, which can be edited and then transferred to the device alongside executables). But until something like that becomes an actual reality in all forms of computing, it's probably not worth it.
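For what it's worth, the runtime half of that wish is small; it's the tooling around it that's missing. Here's a sketch of the lookup such a build step could generate, with the table inlined instead of loaded from localization files. The message key, strings, and language codes are all invented for illustration:

```python
# Sketch: the lookup table a compile-time translation step could emit.
# In the imagined workflow this dict would be generated from files like
# lang/en.txt and lang/de.txt and baked into the firmware image.
TRANSLATIONS = {
    "en": {"not_connected": "Device not connected!"},
    "de": {"not_connected": "Gerät nicht verbunden!"},
}

def translate(key, lang):
    # Fall back to English when a language or key is missing, so an
    # untranslated string degrades gracefully instead of crashing.
    table = TRANSLATIONS.get(lang, TRANSLATIONS["en"])
    return table.get(key, TRANSLATIONS["en"][key])

print(translate("not_connected", "de"))  # Gerät nicht verbunden!
print(translate("not_connected", "fr"))  # falls back to English
```

The hard parts this sketch skips, of course, are the ones the thread is about: actually rendering those strings on constrained hardware, and getting good translations in the first place.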

Domains in which the above comment doesn't hold entirely true:

  - desktop computing (even if translations are often poor and OS support for localization can be lacking)
  - web development (most large frameworks support localization plugins, though loading localizations from files outside of build-time compiled ones is still not widely supported)
  - mobile computing (given that OSes like Android are pretty decent in this regard)


Actually, it's not an edge case.


No, a normal light switch on the wall is all you need.


^ The summarization of this entire discussion in one line is poetic.


But it doesn’t replicate the actual features people want from smart lights, which are:

1. Turn them off and on from my phone, or voice assistant thing.

2. Changing the color and brightness of the lights.

3. Doing 1 and 2 automatically.

Having lights change from white to yellow automatically based on the time is something I don’t want to give up now.

It’s the switch that’s the real issue. If your switch were a panel with on and off buttons that controlled the smart lights, and your lights were plugged into a normal outlet, there would be no problem. Consumer tech is trying to retrofit an environment that hasn’t caught up to commercial lighting systems.


Your requirements are an order of magnitude more complex than what most people expect from room lights: lights on, lights off.


When talking about smart lights? No. Your “solution” is simply ignoring the requirements.


Just FYI Apple Home does allow you to group accessories together.


For the record, the above light problems are all solved within Philips Hue, although they also needed many years of user complaints to iron out such cases.



