Publishing a tablet is much more than identifying the characters. There's often a "join" step, a kind of 3D puzzle where you try to fit together fragments of a broken tablet. And of course you have to provide the archaeological context. I've heard several French assyriologists describe this whole process as long and tedious, especially for artefacts that were excavated long ago and have less context.
As far as I know, OCR for handwritten texts on paper gives poor results. A cuneiform text on a tablet is much harder: the surface is not flat, the visibility of marks depends heavily on the lighting, cracks often alter the characters... For instance, when the writer misjudged the length of the text, they could continue on the sides of the tablet, or on the back, which is usually more convex. Even producing a good series of photos of a tablet is hard work.
Meta: The web site has the full article in one page, but it's split into 6 parts with clicks and scrolls up/down required. Why do people try hard to break simple things? Fortunately, the "Reader View" in my browser provided a merged view so that I could read normally, without patching the HTML with DevTools.
A few years ago I obtained some scans of records from WWI for my grandfather's unit on the Western Front. All the reports were written in English, but in elaborate "copperplate" script, which I imagine is very difficult for automatic OCR; it was difficult for me to read as a native English speaker. Imagine writing like that in some wet, shell-straddled trench.
Once tablets are reconstructed, releasing some kind of 3D scans (raw laser data and meshes) for an open competition with a decent prize could be productive (like the prize for the Herculaneum scrolls in the news this week).
The situation is even worse for cuneiform. With English, you're looking at approximately 70 glyphs including upper case, lower case, digits, and punctuation. Cuneiform throws you into the hundreds of glyphs. And their forms change over time, often drastically.
For 3D scanning, Reflectance Transformation Imaging is pretty cheap, easy, and popular for imaging tablets.
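For anyone curious what RTI actually computes: at its core is a Polynomial Texture Map, a per-pixel fit of luminance as a function of light direction. Here's a minimal numpy sketch of that fit, assuming you already have N grayscale photos from a fixed camera with known light directions; the function names and array shapes are just illustrative, not any particular RTI tool's API.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit a Polynomial Texture Map: per-pixel biquadratic model of
    luminance as a function of the light direction components (lu, lv).

    images:     (N, H, W) float array, one photo per light position
    light_dirs: (N, 2) array of unit light-direction components (lu, lv)
    returns:    (H, W, 6) array of per-pixel polynomial coefficients
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Standard 6-term PTM basis, one row per photo: shape (N, 6)
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    N, H, W = images.shape
    # Solve A @ coeffs = observed luminance for every pixel at once
    b = images.reshape(N, H * W)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM under a new, virtual light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.clip(coeffs @ basis, 0.0, 1.0)
```

Sweeping (lu, lv) in relight() is what makes faint wedge impressions pop in and out of shadow, which is exactly the property that makes RTI popular for tablets.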
This feels like an area where synthetic data can help a lot. It should be fairly "easy" to generate cuneiform-like characters, render them on procedurally generated clay tablets, break tablets using a physics engine and render the pieces in different angles. Training a model on recognizing puzzle pieces with this data would be pretty feasible too.
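This is speculative, but here's a toy numpy sketch of the rendering part of such a pipeline: stamp wedge-shaped impressions into a procedural heightmap, then shade it under a chosen light. Everything here (the wedge model, the shading, all parameters) is a made-up stand-in; a real pipeline would use a proper renderer and a physics engine for the fracturing step.

```python
import numpy as np

rng = np.random.default_rng(0)

def blank_tablet(h=256, w=256):
    """Procedural clay surface: a gentle bulge plus fine noise."""
    y, x = np.mgrid[0:h, 0:w]
    dome = 1.0 - ((y / h - 0.5) ** 2 + (x / w - 0.5) ** 2)
    return 0.2 * dome + 0.01 * rng.standard_normal((h, w))

def press_wedge(height, cy, cx, length=18, angle=0.0, depth=0.08):
    """Stamp one triangular wedge impression, deepest at its head."""
    h, w = height.shape
    y, x = np.mgrid[0:h, 0:w]
    dy, dx = y - cy, x - cx
    u = np.cos(angle) * dx + np.sin(angle) * dy   # along the stroke
    v = -np.sin(angle) * dx + np.cos(angle) * dy  # across the stroke
    inside = (u >= 0) & (u <= length) & (np.abs(v) <= 2.5 * (1 - u / length))
    height[inside] -= depth * (1 - u / length)[inside]  # taper to the tail
    return height

def shade(height, light=(0.6, 0.6, 0.5)):
    """Cheap Lambertian render from the heightmap's gradients."""
    gy, gx = np.gradient(height)
    n = np.stack([-gx, -gy, np.ones_like(height)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light) / np.linalg.norm(light)
    return np.clip(n @ l, 0.0, 1.0)

tablet = blank_tablet()
for _ in range(40):  # scatter fake "signs" made of random wedges
    press_wedge(tablet, rng.integers(20, 236), rng.integers(20, 236),
                angle=rng.choice([0.0, np.pi / 2, np.pi / 4]))
img = shade(tablet)  # one synthetic training image
```

The appealing part is that the ground truth (wedge positions, angles, depths, and later the fracture surfaces) comes for free from the generator, so the same code that renders the images also labels them.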
Weird question - I remember hearing that pre-writing societies had "memory objects" that use a series of symbols to help prompt memorization of an epic poem or the like when held in one's hands. Might cuneiform have been intended to be "read" more like braille, by touch?