Frequently Asked Questions
Measurements
Why don't my measured distance values correspond to the image?
Issue: When conducting reconstructions and taking measurements, e.g. with the measurement tool or by reading numbers off a pictured scale or ruler, you may encounter discrepancies between the measured distances and the image content. Even if a ruler is present in the image, measuring 1 cm along it may yield a different value in the VLT (Virtual Light Table). How is this possible?
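A minimal numeric sketch of the problem (the 236 px span is a made-up figure for illustration): the pixel distance in the image is fixed, but the centimetre value it translates to depends entirely on the PPI used for the conversion.

```python
# Hypothetical numbers: 1 cm on the pictured ruler spans 236 px in the image.
dist_px = 236
for ppi in (300, 600):                # PPI stored in the file vs. true scan PPI
    dist_cm = dist_px / ppi * 2.54    # pixels -> inches -> centimetres
    print(f"at {ppi} PPI: {dist_cm:.2f} cm")
# at 300 PPI: 2.00 cm  <- wrong PPI: the ruler's 1 cm reads as 2 cm
# at 600 PPI: 1.00 cm  <- correct PPI
```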
Solution: Ensure that the correct Pixels Per Inch (PPI) value is set for your image. If uncertain, edit the image (right-click and select "edit" or press O) and adjust the PPI. You can either enter the value manually if you know it, or use the ppi-measurement tool to compute the resolution automatically: simply select the tool and choose two points in the image that are exactly 1 cm apart.
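Conceptually, the ppi-measurement tool has to perform a computation like the following (a sketch under the assumptions above, not the VLT's actual code; the point coordinates are hypothetical):

```python
import math

CM_PER_INCH = 2.54

def ppi_from_reference(p1, p2, reference_cm=1.0):
    """Derive an image's PPI from two points a known real-world distance apart."""
    dist_px = math.dist(p1, p2)  # pixel distance between the two chosen points
    return dist_px / reference_cm * CM_PER_INCH

# Two points on the ruler's 0 cm and 1 cm marks, 236 px apart:
print(ppi_from_reference((100, 50), (336, 50)))  # -> 599.44 PPI
```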
Explanation: Many images contain EXIF metadata that stores their resolution, and the VLT first attempts to read this data to streamline your work. However, the resolution of the recording device (e.g. camera or scanner) does not necessarily match the scale of the depicted objects. To scale your objects accurately, you need a ruler or scale bar in the image as a reference for 1 cm. From this reference, the ppi-measurement tool mentioned above computes the object's actual resolution, ensuring precise measurements.
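For illustration, this is roughly how such embedded resolution metadata can be read, here with Pillow (a sketch; the file name is hypothetical, and not every format stores a resolution):

```python
from PIL import Image

# The embedded value describes the recording device, not the depicted object.
with Image.open("scan.tif") as img:
    device_dpi = img.info.get("dpi")  # e.g. (300, 300), or None if absent
print(device_dpi)
```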
Machine Learning
Can I Use Automatic Segmentation on my Ostraka/Reliefs/Stones?
Although developed within a papyrological project, the VLT is in principle applicable to a whole range of (two-dimensional) fragmentary objects. A method for automatically segmenting other types of material would therefore be helpful as well.
Unfortunately, this is currently not possible. The machines trained for the VLT so far are all based on data annotations for ancient Egyptian papyri. In general, however, it is no problem to train other machines for different types of material, provided annotated training data is available. If you want to contribute by creating your own training data, leading to a segmentation machine for your material, don't hesitate to contact me.
Which Machine Learning Model Should I Use for my Segmentation?
There is no clear answer to that. In the course of my doctoral studies I trained several different machines to segment papyrus and ink, partly to analyse their capabilities. Machines that provide at least halfway reliable results are offered here as potential support in your workflow. It can be worthwhile to experiment with them, though: sometimes different machines yield surprisingly different results.
The Segmentation Result Is Not Good at All! What Can Be Done?
Sometimes the segmentation results for papyri turn out quite badly: parts of the papyrus are not recognised, other elements of the image are misread as papyrus, or parts of the ink are missed. Why is this the case, and how can it be mitigated?
Training a machine learning model requires a large amount of manually annotated training data. For my dissertation, a colleague and I drew masks for 24 different papyri, which were used to adapt the machine learning models to the papyrus setting. Due to this rather scarce training material, there are many configurations of papyri, writing, and backgrounds the machines have never seen before. The machines also struggle with objects that look very much like papyrus but are not, such as wooden frames. The automatic segmentation can therefore be a useful support tool, but in some cases even minor corrections will not bring the result to your liking; you would then have to fall back to drawing the mask manually in a digital image editing tool (a sketch of what such a mask looks like follows below). The best way to make the machines more robust and stable is to have more training material available. If you are willing to provide such data, please don't hesitate to contact me.
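To give an idea of what such annotated training data looks like: each sample is essentially a photo paired with a binary mask that marks every pixel as papyrus or background (a sketch with hypothetical file names, not the project's actual data format):

```python
import numpy as np
from PIL import Image

# One training sample: a photo plus a black-and-white mask of the same size,
# where white (255) marks papyrus pixels and black (0) marks background.
photo = np.array(Image.open("papyrus_photo.png"))
mask = np.array(Image.open("papyrus_mask.png").convert("L")) > 127

print(photo.shape[:2] == mask.shape)  # the mask must align with the photo
print(f"{mask.mean():.1%} of the pixels are annotated as papyrus")
```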
Development
Can I suggest new features/ideas for the Virtual Light Table?
Yes, absolutely! I would be very happy to receive suggestions, ideas, and concepts on how to make the Virtual Light Table even more useful for a broad range of scholars. Please head to the contact section to write to me directly, or create a GitHub issue to address a problem or idea you have regarding the software. Please be aware, however, that further development of the Virtual Light Table currently has secondary priority, as I have to finish my ongoing PhD thesis. This might result in longer development cycles.