For those decrying the limited use of 2D barcodes in the US or in Europe: you've never set foot in a manufacturing environment.
2D barcodes are used for cradle-to-grave inventory tracking by systems like Glovia and SAP. Everything from transmission gears to soda pop is tagged during manufacture with numerous 2D barcodes that change at different stations and at different times.
It doesn't matter what your new standard claims to achieve; it is useless without industry adoption. Toyota, Honda, Yamaha, Ford, Dell, and countless other manufacturing titans have invested collective billions into the QR implementations of their factories and aren't going to re-tool just because your standard has colors.
QR is also massively resilient to failure. Try this experiment: cut a QR code in half and try to read it. It will still succeed. This resilience is pivotal in harsh environments and especially during international/transoceanic shipping.
Black-and-white QR can be engraved on shipping containers and is resistant to corrosion, solvents, and the environment around it. QR codes are regularly burned at thousands of degrees as part of the heat-treatment process for hardened steel, and still perform. Vision systems for forklifts and robotic cranes at shipping yards (most are robotic yards these days) rely on near/far vision systems developed by Intermec and other companies to read a code the size of a postage stamp from up to 150 meters away.
2D barcodes such as QR and datamatrix certainly have their place, but I wouldn't say they're "massively resilient" to failure.
The codes use reed-solomon ECC and the symbology spec has different levels of error tolerance ranging up to a max of ~30% errors. That's certainly robust but it still requires a controlled optical setup to avoid operator frustration. Sometimes, in some applications, a nice big 1D barcode is better. Your 150 meter postage stamp example is an extreme use case that requires a lot of setup to work properly.
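To put numbers on that tolerance: QR defines four error-correction levels, and the approximate fraction of the symbol that may be damaged at each level is well known (L ≈ 7%, M ≈ 15%, Q ≈ 25%, H ≈ 30%). A rough sketch, ignoring the per-block structure of real symbols (the exact correctable count depends on how codewords are split into Reed-Solomon blocks):

```python
# Approximate damage tolerance of the four QR error-correction levels.
# Reed-Solomon with n total and k data codewords corrects up to
# floor((n - k) / 2) unknown-position codeword errors.
QR_EC_TOLERANCE = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

def max_correctable_codewords(total_codewords: int, level: str) -> int:
    """Rough upper bound on how many damaged codewords still decode."""
    return int(total_codewords * QR_EC_TOLERANCE[level])

# A version-5 QR symbol has 134 total codewords; at level H roughly
# 40 of them can be corrupted and the symbol still decodes.
print(max_correctable_codewords(134, "H"))  # -> 40
```

So "cut it in half and it still reads" overstates things: even at level H, lose more than ~30% and decoding fails.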
This color barcode appears to be an effort from a respected institution to increase information density. It doesn't have to be immediately accepted by major manufacturers to succeed. It just has to solve some problems for some applications. I think it's a compelling new symbology.
It is already in the works. In 3-5 years every big-box-store product will have UV QR codes on every side of the external packaging, encoding the existing product barcode plus a unique per-item ID.
This is being done to improve self check out speed (no more having to find the bar code) and help with tracking receipt-less returns (this box of cereal was purchased at our store).
> it doesn't matter what your new standard claims to achieve, it is useless without industry adoption.
You have no idea what this person's intentions are, and calling their work useless because Toyota, Honda, Yamaha, Ford, and Dell don't use it is obnoxious.
> ... rely on near/far vision systems developed by Intermec and other companies to read a code the size of a postage stamp from up to 150 meters away.
Does anyone know of these super-long-range scanners (e.g., 150 m)?
Looking through Honeywell's products, the Granit 1280i [1] claims 16.5 m, which is impressive but not the 150 m mentioned above. I'm curious how aiming is even possible with what I would assume to be an incredibly narrow field of view.
Just remember there are sensors like the CMV12000 that spew out over 3 billion pixels per second @ 10 bit/pixel and, assuming sufficient light, keep motion blur subpixel down to (IIRC) about 50% linear overlap (i.e., only about 50% of the captured pixels have not been captured already).
Assuming 32×32 code pixels of 1 mm × 1 mm each, you get a field of view of about 2 by 3 meters, or smaller if you want to skip exotic monolithic decoders that integrate de-aliasing into the error-correction decoder.
Below about 1.2 sensor pixels per code pixel it stops being fun and you try to find a way to get more magnification.
So, at 150 meters distance, assuming a cube for simplicity and a 5-square-meter FOV: we have a cube of 300 m side length. That cube has a surface area of 540,000 m², which is ~100k times the FOV. At the 300 frames per second you could get out of the sensor, this would take about 20 minutes. So yes, it's a lot, but you can reduce it to 30% if the height difference is approximately known.
Then you can probably get another 10x speedup by restricting the angle in which anything interesting could happen, and you get 3% of 20 minutes (~1200 seconds). That's about half a minute, if you can only tell the scanner that it's roughly there, with a precision of "between one and two o'clock, I'm sure".
And that's <$10k hardware I'm speaking of (not including the cost of a ~$2k FPGA to extract 2D barcodes from its video feed), in 3-digit quantities.
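A quick sanity check of that arithmetic (all figures are the parent's assumptions; the ~50% linear-overlap requirement from the CMV12000 remark roughly quadruples the frame count, which is what lands near the 20-minute mark):

```python
# Back-of-envelope scan-time estimate using the figures above (all assumed).
fov_m2 = 5.0            # per-frame field of view, ~2 m x ~3 m, rounded down
cube_side_m = 300.0     # bounding cube at ~150 m distance
surface_m2 = 6 * cube_side_m ** 2        # 540_000 m^2
frames_needed = surface_m2 / fov_m2      # ~108_000 distinct views
frames_with_overlap = frames_needed * 4  # ~50% linear overlap in x and y
fps = 300
seconds = frames_with_overlap / fps
print(round(seconds / 60, 1))  # -> 24.0 minutes, i.e. "about 20 minutes"
```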
I suppose that color-based codes have more density per "pixel" (2-3 bits), so they can be used to encode relatively large amounts of information. Try encoding 1 KB as a QR code; it becomes pretty unwieldy, if it's possible at all. Think about the size of a reasonable cryptographic public key; wouldn't it be nice to have it optically readable sometimes?
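The density gain is just the base-2 log of the palette size per module, so the module-count saving for a fixed payload is easy to sketch (ignoring ECC and framing overhead, which both symbologies add on top):

```python
import math

def bits_per_module(colors: int) -> int:
    # Each module encodes log2(colors) bits: B/W = 1 bit, 8 colors = 3 bits.
    return int(math.log2(colors))

payload_bits = 1024 * 8  # a 1 KB payload, ignoring ECC and framing overhead
for colors in (2, 4, 8):
    modules = payload_bits // bits_per_module(colors)
    print(f"{colors} colors: {bits_per_module(colors)} bit/module, "
          f"{modules} modules")
```

An 8-color code thus needs roughly a third of the modules of a B/W code for the same raw payload.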
Tagging things during manufacture is a different application area, with different constraints and requirements. Bar codes are not gone because QR codes are here, QR codes won't be gone because color codes appear.
A B/W randomart image which uses alphanumeric and special characters, like the one used by ssh-keygen itself when generating keys, would be enough. With current tech, adding colors is just a recipe for disaster in environments that need maximum reliability.
Personally, I think QR codes are just fine for manufacturing but as for encodings meant for end-users, I think we should be focusing on making them more human-readable or at least visually comparable, without sacrificing bit-depth.
Still, this tech was really neat to play around with and I hope something good comes out of all the work put into it.
Printing a 400-byte payload is pretty easy, but printing a 400-byte payload that's readable at high speed (100 scans per second) is quite a different story. You'll find that in many usage scenarios the practical upper limit for a data payload is determined less by the symbology and more by the speed and reliability at which the symbology can be read by the equipment. Many thermal printers can do 200 dpi or better, which would allow you to print a QR code at maximum capacity into an area a little less than an inch, but you'll be hard-pressed to find a device which can scan those barcodes at a reasonable decoding speed and reliability.
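The "a little less than an inch" figure checks out: a maximum-capacity QR symbol is version 40, which is 177×177 modules, plus the spec's 4-module quiet zone on each side:

```python
# Physical size of a maximum-capacity QR code on a 200 dpi thermal printer.
modules = 177          # QR version 40 is 177x177 modules
quiet_zone = 4         # the spec requires a 4-module quiet zone on each side
dots_per_module = 1    # one printer dot per module: the smallest printable
dpi = 200

side_inches = (modules + 2 * quiet_zone) * dots_per_module / dpi
print(round(side_inches, 3))  # -> 0.925 inch per side
```

At one dot per module the features are 0.127 mm wide, which is exactly why decoding such a print reliably is so hard.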
You're much better off using the QR code to encode a unique key which points to a record of the desired size in some database in a networked system.
Edit: this is actually a discussion I've had in a professional context pretty recently.
> encode a unique key which points to a record of the desired size in some database
I'm sure you understand why this is not going to work the same way as reading the actual public key. Not only does it need a network, but it is also susceptible to having the values stolen from the database by simply enumerating the (short) unique keys.
I just made a short (2048-bit) key pair and turned the public key into a QR code (as ASCII-armoured text, so it could have been denser in straight binary).
I could only get it into a 141×141 symbol by dropping error correction right down to 7%, but under ideal conditions (from my monitor to an iPhone) it reads reliably, as shown. I tried a 4096-bit key, but I couldn't (as text) get it into even a 171×171 QR code at minimum error correction without truncating (and even under ideal conditions, my phone won't reliably read 171×171 QR codes...).
RSA keys are significantly larger than elliptic-curve keys, so for something like putting a pubkey in a QR code, you're much better off going with a 256-bit elliptic-curve key than with any hacks to shave bytes off an encoded RSA key.
Some friends and I experimented with this when running a CTF a few years back. We figured out that by setting some of the Chinese Remainder Theorem values in the private key to zero, we could stuff it into a QR code and read it when printed on a receipt printer.
I'm not talking about cryptographic keys, I'm talking about unique keys. Like a UUID or a serial number.
A barcode is not often used as the sole storage medium in an application. Typically it's used in conjunction with a piece of software that can connect to a database. In a manufacturing setting this could be a piece of shop floor control software. In that setting the barcode doesn't encode any information that could change in the database. It is instead used to encode a machine-readable identifier for the material that it is labeling.
> Think about the size of a reasonable cryptographic public key; would it be nice to have it optically readable sometimes?
An ECDSA key would be 256 bits, or 32 bytes, long. At low error correction, it can fit into QR code version 2 (25×25 modules). Or if you're thinking of RSA, then a 2048-bit key would fit in version 10 (57×57).
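A quick check against the QR capacity tables (byte-mode capacities at level L, from the QR specification; the side length in modules is 17 + 4 × version):

```python
# Byte-mode capacity (bytes) at error-correction level L for a few versions,
# per the QR specification's capacity tables.
CAPACITY_L = {1: 17, 2: 32, 10: 271}

def modules(version: int) -> int:
    """Side length of a QR symbol in modules."""
    return 17 + 4 * version

ec_key_bytes = 256 // 8    # raw 256-bit elliptic-curve public key
rsa_key_bytes = 2048 // 8  # raw 2048-bit RSA modulus

assert ec_key_bytes <= CAPACITY_L[2]    # fits in version 2
assert rsa_key_bytes <= CAPACITY_L[10]  # fits in version 10
print(modules(2), modules(10))  # -> 25 57
```

Note that this is for a raw key: a DER-encoded 2048-bit SubjectPublicKeyInfo is ~294 bytes and would need a version above 10.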
To copy a comment from a Github issue [1], comparing the different implementations:
- CRONTO, PM-Code and HCCB and AuthPaper are proprietary. CRONTO has attracted some usage mainly by banks, PM-Code looks like vaporware, AuthPaper seems inactive and HCCB is dead [2].
- CobraKing seems to be a 2012 research project, unmaintained
- HCC2D seems to be mainly academic exploration
Papers and results similar to HCC2D pop up periodically (e.g. [3]), but unfortunately nobody has released any (experimental) source code. It's quite frustrating. For example, AuthPaper implemented their solution on top of ZXing, but they never contributed back. They apparently thought they could make some money out of it, but now that their venture is inactive, their ZXing modification is in limbo as well.
From my one-hour research, that makes JAB Code the only actually FOSS implementation of a high capacity 'barcode'. If you know of other (FOSS) implementations, please let me know.
I'm quite surprised JAB Code has existed for so long in a fairly production-ready state without attracting much attention. It deserves much more, looking at the rest of the field.
Thanks for doing this research! I've long dreamed of making a "print to binary to paper" backup system, but all the 2D barcodes I considered had patent encumbrances.
People here seem to think that this is someone's weekend project, not the result of serious engineering from an institute of the Fraunhofer Society (which it is).
So it is some research institute's weekend project. The process of making something an ISO standard is not as rigorous as the IETF's or IEEE's. ISO has over 250 technical committees, cranking out useless standards all the time. Most members vote to approve whatever comes down the pipe if it doesn't impact them directly.
Looking closer at the spec you linked, it is tacked on to a standard for visual checksumming of printed documents. It isn't even moving forward on its own merits as a reasonable 2D barcode implementation.
I've just skimmed over the both specifications and haven't noticed anything that would indicate that this is a "weekend project". In fact, I'm impressed enough that I might work on an implementation in my preferred language (assuming a decent testing dataset is available).
What exactly are the technical issues that make you dismiss this project so harshly?
One of the nice 'features' of QR codes is they're instantly recognizable as QR codes due to the 3 registration marks, and so it's obvious you could pull out your phone and scan one (not that anyone does that).
JAB codes just look like a jumble of colored pixels -- it's not obvious it's encoded data, if you were to see this "in the wild".
It kind of misses another potential feature, which is registration marks could also establish a baseline for colors, which a reader/scanner could then use to compensate for differences in color reproduction (due to printing / screen settings / etc).
I can recognize those features. Sure, you need to look for them, but why would anyone care to? Surely after encountering only a few RGB codes the average person will have no trouble inferring that a jumble of colored blocks against a white background is likely to be encoded data. Same "training process" as needed for QR codes - although in this case the quiet zone is not needed, so you _could_ use these in a deliberately obscure fashion if you wanted, which isn't a shortcoming.
Yes, you're able to realize that any given RGB mess-of-pixels printed somewhere is "likely to be encoded data", but maybe, you'll think, there might be multiple standards for RGB-encoded data (there are) and so, what standard does this code obey? Is it one your phone can parse?
QR codes are able to be recognized as QR codes, which means you're able to think "I could definitely scan that", rather than thinking "that is some data there, that I could maybe read or maybe not."
It's the same as the difference between saying that data on the wire is "a TCP packet" vs saying it's "HTTP." Yes, either way, you know that you've got some packets—but if you want to parse them, you'd better be able to recognize the format of those packets.
We're thankfully not cursed with a multitude of competing standards for bilevel monochrome 2D codes (they do exist, but 90+ percent of everything out in the wild is QR). I don't think an end user is going to be competent to understand the differences between them in any case. The standard operating procedure will continue to be "point the scanner at it and hope for the best." If multiple competing standards do become popular, the most likely result is that scanners support all of them and we just eat the software bloat.
There are also multiple standards for black and white 2d bar codes.
This is backed by the Fraunhofer Society and the German government, and is on track to become an ISO standard. It will be implemented in scanners (hardware and software).
Yes, and most people are not aware that there are multiple barcode standards, even with 1D barcodes. Ask anyone what the difference is between Interleaved 2of5, Code 39, Code128, UPC/EAN, PostNet, etc. and they'll be hard-pressed to identify any of the distinguishing characteristics that are immediately obvious to the trained eye.
Hey, on page 29, does anyone see a pattern that resembles... historical German symbology used to represent national pride and a government party that rose to power in the 1920s? Am I the only one? Sorry, it just popped out at me.
Yeah, but I'd give them the benefit of the doubt unless it's visible on actual codes or there is evidence this was intentional.
You end up with that pattern all the time when you try to efficiently pack rectangles around a different sized rectangle. I remember trying to avoid that back when placing fields in age of empires 2.
A black border with two tiny transparent, black-outlined squares next to the NE corner, one on each side. A system like this would allow different protocols and versions to use different symbols in different corners so that phones and people could recognize which it is at a glance.
they also work in two-tone colors, so they don't ruin a billboard/leaflet aesthetic (even if there are a lot of readers out there that get confused by a white-pattern-on-dark-background code, even though the spec allows it)
Their "Household of the future" lab consistently shows off amazing tech that could be world changing, but the number that actually seem to come to production always seems remarkably low.
I wonder what throughput it can give when used as an animated sequence of codes? I experimented with animated QRs for data transfer last week and the real maximum I’ve achieved was around 9KB/s. https://divan.github.io/posts/animatedqr
Using colored high-capacity encoding should yield much better results, given the decoder is as fast as QR one.
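Assuming the same symbol geometry and frame rate, throughput should scale with bits per module, so the ~9 KB/s measurement gives a rough ceiling for colored variants:

```python
import math

# Naive throughput scaling: same symbol size and frame rate as the B/W
# experiment, just more bits per module. Decoder speed is assumed equal.
bw_throughput_kBps = 9.0  # ~9 KB/s measured with B/W animated QR in the post
for colors in (4, 8):
    factor = math.log2(colors)  # 2 and 3 bits per module vs. 1 for B/W
    print(f"{colors} colors: ~{bw_throughput_kBps * factor} KB/s")
```

In practice color misclassification would eat into this, but 2-3x is the theoretical gain.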
Neat! I've always been a fan of these "low-tech" (not involving complicated radio protocols like Bluetooth that inevitably seem to never work) transmission protocols. I kinda wish something like what you built was more widely supported, so I could transfer files between my phone and OS X. (Which both have Bluetooth, yet it never works. So I upload to Drive.)
What if you used high capacity code like HCCB or CQR Code-9.
CQR has a capacity of 3KB per square inch [1], based on your formula of 11 frames per second, you could easily do 33KB/s.
And that is just a square inch, I think you could reach pretty good transfer speeds with higher resolution and wider dimensions.
Speaking of screen-to-watch data transfer, the Timex Data Link watch somehow accomplished this in the mid-90s. There must have been some dark magic to make it work on a watch-battery powered processor at the time.
Unlikely. I skimmed through both patents [1][2], and I don't recall any numbers on throughput. But the general principle is super cool: they analyze the change in luminance of the picture, and the chrominance can be whatever you wish (so they can replace the particle cloud with any custom animation). The idea is that the human eye is much more sensitive to changes in color than to these changes in luminance, but for image-processing software it's quite different: it can decode the luminance changes quite reliably :)
So my guess is that Apple's approach is optimized for coolness, rather than transfer speed.
My bank uses some colored pixelcode not unlike this one. I scan a code on my screen using a little device that I insert the bank card into.
It's cool, it works well and it's fancy, but every time I do banking I have to turn off my computer's "night light" mode.
I assume lots of people use color filters for various good reasons. Are these codes resilient to that? Otherwise I'd truly prefer old-school monochrome QR codes, to be honest.
This is pretty easily fixed by doing white-balance correction. One piece of the barcode is always kept white, and it is used for color calibration. Assuming the color transformation is constant across the entire code (which is the case in your example), you can undo some pretty large color shifts.
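A minimal sketch of that idea, assuming a single global linear color cast (per-channel gain), which is roughly what a "night light" filter applies:

```python
def white_balance(pixels, white_sample):
    """Rescale channels so the sampled 'white' patch maps back to (255,255,255).

    pixels: list of (r, g, b) tuples.
    white_sample: the measured color of a region known to be printed white.
    Assumes one global linear color cast across the whole symbol.
    """
    gains = [255.0 / max(c, 1) for c in white_sample]
    return [tuple(min(255, round(c * g)) for c, g in zip(px, gains))
            for px in pixels]

# A warm "night light" cast dims blue; correction restores the palette.
cast = [(255, 230, 180), (0, 0, 160)]  # white patch and a blue module
print(white_balance(cast, white_sample=(255, 230, 180)))
```

Real decoders would also handle nonlinear response and spatially varying illumination, but the white reference gets you most of the way.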
I've seen worse: some machine had a built-in camera recognizing QR code, but it couldn't really control exposure so I had to manually set my phone's brightness low to get the code detected. Quick Response? Screw that.
In case anyone missed it, a very interesting link explaining how QR codes work was posted on HN a few days ago[1]. I wish there was some explanation like this on this JAB code and how exactly it gives high capacity over the b/w code.
> A JAB Code contains one master symbol and optionally multiple slave symbols. Master symbol contains four finder patterns located at the corners of the symbol, while slave symbol contains no finder pattern
Can't speak for the Americas, but here in Europe QR codes aren't used that much, I feel, except maybe for mobile payments. Compare this with China, where QR codes are in literally every store, each taxi driver has a printed one nearby, and even street food carts use them for payment. I can't see this one becoming popular, given how many are printed on glossy/dirty paper (often using black-and-white printers as well). I am amazed, though, how well the standard QR code works in most cases, even with everyone adding an icon in the middle of it (basically removing error-correcting redundancy).
I think it's the same for the Americas, but just because the average person doesn't use QR codes very much that doesn't mean that such technologies aren't being used in ways that aren't widely reported. For instance, I know someone who placed small barely-visible QR codes in videos that get passed off to editors so that each clip and frame number(and other info) can be detected programmatically despite editing software stripping out metadata. I think QR is one of those things that solves very niche problems that even the average engineer might not be made aware of.
People are quick to point out "nobody uses QR", but it's not exactly fair to expect ubiquity from something like that. My response would be the same for those in this thread saying "people can't recognize JAB codes like they can with QR." Well, those who find a use for JAB codes are probably going to be the people using them 99% of the time (maybe more than QR, because of data density), so they're almost certainly going to know how to recognize them.
> People are quick to point out "nobody uses QR" but it's not exactly fair
I think everyone uses them, but they just don't know it.
Pretty much every label on every consumer good (in America, at least) from cough syrup to Coca-Cola to computer chips has a tiny QR code on it. But since people don't actively engage with the code, they don't realize that it's there, and is a big part of the process of getting things from raw material to store shelves.
Another interesting use case: Crowdmark[0], a digital grading platform uses QR codes printed on test pages to match these up to the correct student when they are scanned without the need for any manual matching process.
At my work everything has a barcode on it. It seems to carry enough information.
High-capacity codes probably have different applications. I haven't seen them anywhere yet, but there are several products around them. I think printing a public key with such a code on a business card would be cool.
Facebook and Google use QR codes to link phones and desktops. The site shows the QR code, you point your phone at it, and that's it. I can't think of anything quicker. People sometimes say they're ugly. Who cares? You'd rather type a load of random crap instead?
QR code is pretty minimal in the US too. When they first hit it big, there was lots of talk, and they started to pop up all over the place, but it never really solidified in the market.
Mostly, I trust QR codes about as much as I trust random bit.ly links, i.e. not much at all.
They aren't used much in the US. I'm a bit surprised they aren't used more for storing/communicating keys. An app I was contemplating would use them to pass around ed25519 keys (public for others, private for yourself AES'd).
An eight-colour code does not seem smart for printed codes where most printers use only four inks. A colour code using cyan, magenta, yellow, white and black might be more reliable (although yellow could easily be confused with white). The PDF suggests 256-colour codes are supported, but that would mean luminosity is a factor, and I can't see how that would ever be reliable if the code is intended to be scanned with a camera.
Yellow and magenta are also generally fugitive. Meaning that, over time, due to UV exposure, the yellow will completely fade, followed by magenta, and all you're left with is cyan and black. You'll see this in shops with advertising posters in their front window that haven't been changed in a long time.
Tried a few times with the "Scan" demo on https://jabcode.org/create/ using Firefox for Android, but it appears to just get stuck displaying a loading spinner.
Seems to work in Chrome, so not sure what's wrong with the implementation to make it not work.
To add to this, I am getting vague error messages with any text of a certain length (longer than 100 characters or so) if I tweak any, or at least several, of the advanced settings.
Demo (https://jabcode.org/create/) worked on the first example I tried, then I tried using advanced settings and it threw an "Error Something went wrong" every time.
With Advanced Settings you control the format and capacity of the code, so if you specify too many characters or too high of an error correction level it will fail (with that rather unhelpful error). Try increasing the version (which adds more pixels in the X or Y direction), or adding a few slave blocks.
(The above is based on my quick exploration with it on Safari, YMMV)
Without the advanced settings, it seems to automatically grow the code as needed. You need to manually add "slave" boxes though (to the right of the sliders). The interface is nonintuitive.
It seems to stop working at particular text lengths. Sort of a bummer when you want to see how much text it can accommodate in a square, compared to a QR code.
Imagine downloading a manifesto or something from a protest sign in a newspaper or on TV.
Imagine downloading a working application from a barcode (perhaps assuming you have the "reader" app, which includes a set of software libraries to do some of the heavy lifting / batteries included) -- the Game Boy did it. It might be useful in a disaster, or under an oppressive regime, but at worst it would be fun.
3D models? Retro game console with games distributed on paper?
My imagination is limited, but I have a feeling that barcodes are something waiting to meet some newly-possible potential with the advent of handheld scanners and very high capacity barcodes.
We tried VR and PDAs and cellphones and stuff in the late 80s/early 90s, but it wasn't until the 2010s that they became something really useful; I wonder if barcodes will find some new life in the future, now that more capable and available hardware exists.
Well, if your machine-readable color key fades as well, maybe it's not so much of a showstopper if the colors fade.
The primary problem I see is that so many industrial cameras today are black and white. B&W cameras can send at a higher framerate and you don't need to convert to greyscale as your first step in your algorithm, so it saves time to use b&w cameras.
Well, I hope the designer accounted for multiple colors failing to display properly. The pink and red are impossible for me to discern. My bank uses colored QR-style codes, and they're impossible to use with (very common) blue-light filters active in the evening hours.
Every HN submission is treated as if it's a breakthrough discovery to herald in a new era of human society. Constructive criticism (like the parent comment of your comment) is helpful, considering that the author of JAB Code makes no mention of this obvious issue (that I could find).
This would fail if the decoding algorithm was trying to match exact RGB or YUV values. But in reality, all it has to do is disambiguate 8 (in the case of the example) colors.
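That disambiguation is just nearest-neighbor classification against the known palette. A minimal sketch (the 8-color palette being the corners of the RGB cube is my assumption, consistent with the spec's "keep a distance in the RGB cube" wording):

```python
def classify(pixel, palette):
    """Map a measured RGB pixel to the nearest palette entry (Euclidean)."""
    return min(palette,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(pixel, p)))

# Assumed 8-color palette: the corners of the RGB cube.
palette = [(r, g, b) for r in (0, 255) for g in (0, 255) for b in (0, 255)]

# A faded, shadowed red still classifies as red.
print(classify((200, 40, 60), palette))  # -> (255, 0, 0)
```

As long as lighting and fading shift a measured color less than half the distance to the next palette entry, decoding succeeds.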
on a phone with average brightness, it's okay. on a big monitor, it hurts my eyes looking at it, like green text on a red background. so i genuinely hope these never get used.
My post goes into a bit more detail on how it works and why, for instance, the given colors were chosen. It appears they too use the Lab color space.
I don't think they use the Lab color space, except if you define your own colors to use in the palette, did you see a reference to it?
From their encoding spec:
> In order to optimize the decoding of JAB Code, the used colors shall be so distinguishable as possible. Therefore, the used colors shall keep a distance from each other in the RGB color space cube
> In case of 256-color-mode, the color channel R and G take eight values, 0, 36, 73, 109, 146, 182, 219 and 255, and the color channel B takes four values, 0, 85, 170 and 255, which will totally generate 256 colors.
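The quoted construction can be reproduced directly; note the B channel gets fewer levels, presumably because blue contributes least to perceived luminance:

```python
# The 256-color palette as described in the quoted spec text:
# R and G each take 8 evenly spaced values, B takes 4.
r_vals = g_vals = [0, 36, 73, 109, 146, 182, 219, 255]
b_vals = [0, 85, 170, 255]
palette = [(r, g, b) for r in r_vals for g in g_vals for b in b_vals]
print(len(palette))  # -> 8 * 8 * 4 = 256
```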
Which is too bad, because your idea of using Lab with ChromaTags to enable quick and accurate finding of the symbol was a really insightful technique. The decoding algorithm they suggest seems like it would be slow, and their use of RGB seems likely to cause a whole bunch of issues due to color-selective fading, shadows, damage, lighting, incorrect printer calibration, etc.
All barcodes can be printed on glossy papers, so that's not a unique challenge. Various light conditions aren't terribly difficult to account for, especially when known colors are involved (as they are here).
I wonder if the 'Master'/'Slave' terminology is necessary here? Aside from being problematic, does it really describe the setup any better than 'Primary'/'Secondary'?
Agreed. It's unfortunate but understandable we still use this terminology for existing technologies... but we absolutely should not be propagating it to new ones.
Embarrassing story time: I was working in Brazil on a database, and used the Portuguese translations for "master" and "slave" in a technical meeting (I speak Portuguese). The whole room looked at me in shock, and my manager politely corrected me: "um, we just use the English terms 'master' and 'slave'... if you translate them, it's pretty racist."
My wife is Brazilian and I showed her this comment. She says the Brazilians were just giving you a hard time and that they don't find those terms offensive.
it doesn't really even describe the relationship accurately- the "slave" symbols aren't doing anything for the "master" symbols, they aren't being delegated to or controlled by the "master" symbols.
"Beacon" and "frame" would describe them better, or even "lord" and "serf" if you really need there to be a power dynamic
At a family dinner I once jokingly described my gluten-free girlfriend as "glutarded," which was our inside joke way to refer to her dietary restrictions. I forgot that my aunt's husband, who has some congenital language issues that leave him with a speech impediment, had grown up being bullied and called 'retard,' until he gasped when he heard my words. I felt horrible, and have since been more conscious with my words. I share this story here because I think it points to a similar issue--the tech world is often a homogenous environment without much representation for African Americans. Can you imagine walking into a classroom full of African American students and casually throwing around the phrase "master and slave" without evoking discomfort in that audience? If we continue to use this language comfortably it implies to me that we aren't inviting to the table the community for whom slavery is still very much an open wound.
The focus should be on building environments where those pasts can be discussed and the wounds healed. I can absolutely imagine correctly using the terms master and slave in an environment where some of the people have a darker skin tone than I have.
Correct use of words is important if those words have been misused in the past.
Is it problematic outside of the US? That said, possibly not the most immediately illustrative choice of terms; master/slave usually implies who's in charge, not a spatial relationship.
Because people cannot separate the abstract concept of slavery from the actual historical happening.
Personally, I want to be a master to my mechanical slaves (tools, dishwashers, computers, robots) and can't wait for a future where machine-slavery is so pervasive that no humans need to work anymore.
I am. Why do the washing machines keep washing pensioners' clothes, if they're dead weight to the economy? Because we (the intelligent / sentient / human civilisation) decided so.
Institutionalized care for the weaker members of society actually makes productive members more productive.
Likewise for social security and state-run unemployment insurances.
AI is coming in the form of industrial automation, in a very competitive, capitalist framework. Agriculture and resource extraction are also being automated.
Capital is literally growing a brain, and once it doesn't need us it will stop feeding us, and we'll be powerless against it because we are so dependent on it. Think of a spider shedding its dead skin after molting.
Fundamentally, demand is survival instinct, which develops under Darwinian pressure. B2B is already growing faster than B2C. Wages are globally stagnating while capital keeps on growing. The industry as a self-maximizer can be its own consumer.
> Governments will be able to subsidise it. [...] Otherwise there will be a revolution.
This assumes that humans will still have some kind of power. Being somewhat competitive in the job market is the only power most people have today.
You're breaking a rule of this site- you should assume good faith when talking to other people on here. There is no evidence that anyone wants to be "censorious for the sake of being censorious"- even if you don't agree with the reasoning behind this discussion it doesn't change the fact that people have put forward reasons.
"We've had zero problems with master/slave terminology for decades because everyone dealing with it was cognizant enough to realize we weren't talking about human bondage."
That's a pretty big assumption. Maybe people did have a problem but felt uncomfortable voicing that opinion, or maybe, in an industry that's overwhelmingly white, no one had the experience necessary to notice that it might be problematic.
Calling it "censorship for the sake of censorship" is just about the most bad-faith assumption you could make. Word choices change with the culture; it's a fact of life.
I'm downvoting you not because I'm upset and offended, but because you're breaking the rules of this site with this post. One of the rules here is that you must assume "good faith" in conversation with people, but instead you're accusing people of wanting "censorship for the sake of censorship". While you may not agree with the reasons people advocate for this change, they are presenting reasons and there is no evidence that those reasons are made up simply to mask a desire to censor people.
There was a similar discussion a while back and I can't find the exact comment, but the commenter I'm paraphrasing posed this example:
What if database rows marked for deletion were marked with a "Jew" flag, and to process the deleted rows you ran a "holocaust" function? Would you be able to compartmentalize those words? Would you insist that people shouldn't be offended? If you think these situations are different, why?
It's bad to "exterminate" Jews but not bad to exterminate termites (well, at least in the human value system). The "final solution" to the Jewish question is bad, but the final solution to an exam math problem is good.
Same words can mean different things in different contexts. A "master" is someone who rules, gives orders, has people working for him. A "slave" is someone or something that follows the master's orders without the capacity to resist or deviate from them (well, that's the idea... we'll see how AI turns out).
Cameras only see one colour per pixel (because of the Bayer colour filter array). Because our eyes are much less acute at resolving colour than luminance, this tradeoff works well.
But in this context I'm not sure at a pixel level how much 'extra' information is added by _removing_ red/green from the blue pixels, green/blue from the red etc.
In other words, if the image came from a digital camera, colour may or may not help you stuff more information into the same number of pixels. In any case, working with the raw Bayer CFA source image would almost certainly be beneficial over interpreting the image after it has been converted to a normal RGB image (losing information in the process).
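To make the tradeoff concrete, here's a minimal NumPy sketch of what an RGGB Bayer CFA does: each sensor pixel records only one of the three colour channels, and everything else has to be interpolated back during demosaicing. (The RGGB layout is just the common case; actual sensors vary.)

```python
import numpy as np

def bayer_rggb_mosaic(rgb):
    """Simulate an RGGB Bayer CFA: keep one colour sample per pixel.

    rgb: (H, W, 3) array with even H and W. Returns an (H, W) mosaic.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

# A tiny 4x4 test image: the mosaic keeps 1 of 3 samples per pixel,
# so 2/3 of the colour information must be interpolated back later.
rgb = np.arange(4 * 4 * 3).reshape(4, 4, 3)
m = bayer_rggb_mosaic(rgb)
```

This is why "three channels per pixel" from a camera is partly an illusion: two of the three values at every pixel were reconstructed, not measured.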
This is an interesting point, but it assumes perfect optics. If the image is slightly out of focus, I think three lower resolution images in different spectral regions is a big win.
I think it depends on the context - if you're designing the whole system from bar code to camera, I'm not sure that the color will help. But in that case you can assume close to perfect optics and I'm not sure if you would find an improvement or the opposite over a monochrome setup with better SNR.
On the other hand, if you're stuck with things like mobile phone cameras and can only control the bar code side, then I'd imagine you'd see some improvements, as you say.
An interesting middle ground would be if you could get the raw image from the sensor before a generic debayer algorithm gets applied.
Good luck, keep up the good work... I've worked commercially on optical barcode systems and it's a super interesting topic (though it looks like my post has been downvoted out of existence, so maybe you won't see this).
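The "middle ground" of getting the mosaic before a generic debayer runs is easy to sketch: you can slice a raw RGGB frame into four half-resolution, single-channel planes with no interpolation at all, which avoids demosaic artifacts entirely. A minimal NumPy illustration (the RGGB layout is an assumption; check the sensor's actual pattern):

```python
import numpy as np

def split_cfa_planes(mosaic):
    """Split a raw RGGB Bayer mosaic into four half-resolution planes
    (R, G1, G2, B) without any demosaic interpolation."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b

# Averaging the two green planes gives a clean half-resolution,
# roughly luminance-weighted image with no debayer artifacts to
# confuse a barcode decoder.
raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_cfa_planes(raw)
green = (g1 + g2) / 2.0
```

For a colour symbology, the three planes are also cleanly separated spectral samples, rather than the channel estimates a debayer algorithm produces.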
I don't want to represent myself as one of the inventors! I just saw this on HN, skimmed the technical report, and found a bunch of comments on HN that could be answered with my cursory knowledge thus gained.
What I can offer is that it is now possible to get memory buffers containing "raw" sensor data from mobile phones. I've only done it on iPhone so far, but the "camera2" API on Android looks to support this as well. It only works in single-shot photo mode - I suspect there isn't the bandwidth to do 30 fps streaming, and they rely on the ISP for debayering and color space conversion in video mode.
iPhones seem to have negligible chromatic aberration in their raw output, weirdly, so that isn't a blocker for full sensor resolution grayscale imaging. Someone could exploit this to write the world's greatest mobile phone QR reader app.