In 2016, they concluded, on the basis of algorithms, statistical probability, and textual evidence, that there were at least four different authors, possibly six. The findings were published in the Proceedings of the National Academy of Sciences of the United States of America. The 2016 study received a lot of attention. I blogged on it with my own comments here, here, and here, and I noted comments by George Athos here. More recently, the Arad ostraca have been in the news with the discovery of additional writing on the back of ostracon 16.
But the TAU researchers kept thinking of other ways to explore these questions. They decided to compare the algorithmic methods, which have since been refined, with the forensic approach, and invited [forensic handwriting specialist Yana] Gerber to join the team.
Using her forensic methods, Gerber found that the 18 texts were written by at least 12 distinct writers with varying degrees of certainty.
You can read the technical PLOS ONE article that is the basis for the current story here.
I don't feel qualified to evaluate the technical elements of the study. I note that it made an effort to falsify the results:
The second dataset, used to validate the two algorithms, contained handwriting samples collected from 18 present-day writers of modern Hebrew. This dataset allowed us to estimate the False Positive and False Negative rates for the algorithmic methods that we employed; it can be downloaded at [42]. It will be stressed that the modern Hebrew dataset was not used to train or calibrate the algorithm for its activation on the first, ancient Hebrew dataset (or vice versa). The purposes of the modern Hebrew dataset were algorithm verification and sanity check.

The results of the "sanity check" were as follows:
Modern Hebrew script experiment

The Modern Hebrew experiment yielded 4.76% False Positive and 2.66% False Negative error rates. These results demonstrate the soundness of our algorithmic sequence. In fact, taking into account the 0.1 threshold, the empirical error rates may indicate "conservativeness" of our p-values estimation.

A 95%+ success rate looks good. However, this cross-check seems only to have been applied to the algorithms. What about the human forensic analysis? It was the forensic examination that found the larger number of writers for the Arad ostraca. Were Gerber's results cross-checked in a similar way? I could not find an answer to that question in the article, although I may have missed it.
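For readers unfamiliar with these terms, here is a minimal sketch of how such False Positive and False Negative rates are computed for a same-writer test. This is not the study's code, and the data are invented; the convention assumed here is that the null hypothesis is "same writer," so a p-value below the 0.1 threshold declares the two samples to be by different writers.

```python
# Hedged sketch: computing FP/FN rates for a same-writer hypothesis test.
# Assumed convention: null hypothesis = "same writer"; p-value < THRESHOLD
# rejects the null, i.e. declares "different writers". All data are made up.

THRESHOLD = 0.1

# (p_value, truly_same_writer) for hypothetical pairs of handwriting samples
pairs = [
    (0.02, False),  # correctly separated: different writers, low p-value
    (0.45, True),   # correctly grouped: same writer, high p-value
    (0.07, True),   # false positive: same writer, wrongly declared different
    (0.30, False),  # false negative: different writers, not separated
    (0.60, True),
    (0.01, False),
]

def error_rates(pairs, threshold=THRESHOLD):
    """Return (false_positive_rate, false_negative_rate) over the pairs."""
    fp = sum(1 for p, same in pairs if same and p < threshold)
    fn = sum(1 for p, same in pairs if not same and p >= threshold)
    n_same = sum(1 for _, same in pairs if same)
    n_diff = len(pairs) - n_same
    return fp / n_same, fn / n_diff

fp_rate, fn_rate = error_rates(pairs)
print(f"False Positive rate: {fp_rate:.1%}")
print(f"False Negative rate: {fn_rate:.1%}")
```

On this toy data both rates come out far higher than the study's 4.76% and 2.66%; the point is only to show what the two numbers measure and how the 0.1 threshold enters the classification.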
As for the broader claims, as I said in 2016, we already have a good indication that literacy was widespread in the kingdom of Judah c. 600 BCE. This study gives us no reason to think otherwise. The sample is still small, but it seems to offer some confirmation of that conclusion.
Stay tuned. Our tools for asking such questions continue to improve. Cross-file under Technology Watch.
Visit PaleoJudaica daily for the latest news on ancient Judaism and the biblical world.