Please use this identifier to cite or link to this item: http://hdl.handle.net/1946/8674
This thesis describes a system implemented to identify bone types in X-ray images of chicken fillets. The system has two key aspects: learning and judging. When learning, the system scans an image and extracts numerical information from it. The type of bone is then manually identified and matched to the extracted numerical information.
Following the learning phase, the derived data is projected into a multidimensional space where each variable of the bone is paired with a specific dimension. Each spatial dimension is then represented by a single array in code.
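A minimal sketch of this data layout, where one array holds one spatial dimension across all learned samples (the feature names here are illustrative assumptions, not the features used in the thesis):

```python
# Hypothetical feature arrays: each spatial dimension is a single array,
# with entry i belonging to learned sample i.
length  = [12.3, 8.1, 15.0]      # dimension 1: e.g. bone length
density = [0.70, 0.90, 0.50]     # dimension 2: e.g. mean pixel intensity
labels  = ["rib", "fan", "rib"]  # manually identified bone type per sample

def point(i):
    """Location of learned sample i in the multidimensional space."""
    return (length[i], density[i])
```

Sample `i`'s coordinates are gathered by reading entry `i` of every dimension array.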
When judging, the system extracts the same numerical information from a new image of an unspecified bone type. This information is then mapped to a location in the already populated multidimensional space. The balance of bone types in the vicinity of this location indicates how likely it is that the bone is of a given type.
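One way to score "how inclined towards a specific type the vicinity is" is a nearest-neighbour vote; the thesis only speaks of the vicinity of the location, so the use of k nearest samples and Euclidean distance here are assumptions:

```python
import numpy as np

def judge(features, learned_points, learned_labels, k=5):
    """Score a new sample by the bone types found near its location
    in feature space. The k-nearest-neighbour vote is an assumption;
    the source only describes examining the 'vicinity'."""
    # Euclidean distance from the new sample to every learned sample.
    dists = np.linalg.norm(learned_points - features, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        votes[learned_labels[i]] = votes.get(learned_labels[i], 0) + 1
    total = sum(votes.values())
    # Fraction of nearby samples of each type, read as a likelihood score.
    return {label: n / total for label, n in votes.items()}
```

The returned dictionary maps each bone type to the share of nearby learned samples of that type.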
To determine the relative weight of each dimension, the dimensions were merged into categories, and the categories were then merged into the total outcome. Dimensions were merged into categories using methods determined manually in each case. When merging categories into the outcome, the category weights were incremented one after another; for each weighting the whole system was run and the number of correctly classified images recorded. Finally, the weighting combination with the best total outcome was selected.
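The category-weighting search described above can be sketched as a grid search: reading "incremented one after another" as trying every combination of candidate weights is an assumption, and `evaluate` stands in for running the whole system and counting correctly classified images:

```python
from itertools import product

def best_weighting(candidate_weights, categories, evaluate):
    """Try every combination of category weights, run the whole system
    for each, and keep the combination that classifies the most images
    correctly. 'evaluate' is a stand-in for the full judging run."""
    best_weights, best_correct = None, -1
    for combo in product(candidate_weights, repeat=len(categories)):
        weights = dict(zip(categories, combo))
        correct = evaluate(weights)  # number of correctly judged images
        if correct > best_correct:
            best_weights, best_correct = weights, correct
    return best_weights, best_correct
```

Because the system is rerun for every combination, the cost grows exponentially with the number of categories, which is workable only for the small category counts implied here.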
The system correctly predicted the bone type for 41 out of 42 images, an accuracy of about 98%. This is a rather good result, especially considering that the system only had 42 images to learn from. These numbers must be interpreted with caution, however, because the dimensional weights were determined after the fact.