We have recently upgraded the algorithm in FraD.

    FraD previously used a learning model based on a conventional Convolutional Neural Network (CNN). We have now upgraded to a model based on the Vision Transformer (ViT) to improve image-classification performance.

    Earlier this year, ChatGPT gained widespread public attention. The machine learning architecture behind it is the Transformer, which uses a self-attention mechanism for natural language processing. ViT is a variant of the Transformer that applies self-attention to images. Unlike conventional CNNs, which rely on convolution as an inductive bias to aggregate information from local to global features, ViT learns global features from the very first layer.
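    To make the contrast concrete, here is a minimal, illustrative sketch (not FraD's actual code) of the ViT idea: an image is split into patches, and a single self-attention layer lets every patch attend to every other patch, so the receptive field is global from the start. All sizes and weights below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_patches(image, patch):
    """Split an (H, W) image into flattened (patch x patch) tokens."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    return (image[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .transpose(0, 2, 1, 3)
            .reshape(rows * cols, patch * patch))

def self_attention(tokens, dim):
    """Single-head self-attention with random (untrained) weights."""
    d_in = tokens.shape[1]
    Wq = rng.normal(size=(d_in, dim))
    Wk = rng.normal(size=(d_in, dim))
    Wv = rng.normal(size=(d_in, dim))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(dim)          # every patch scores every patch
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights

image = rng.normal(size=(32, 32))   # stand-in for a fracture-surface image
tokens = to_patches(image, patch=8) # 16 tokens, each a flattened 8x8 patch
out, attn = self_attention(tokens, dim=16)

# Each row of `attn` spans all 16 patches: the attention is global even
# in this single first layer, unlike a CNN's small local receptive field.
print(attn.shape)
```

    In a CNN, a first-layer unit sees only a small neighbourhood (e.g. a 3x3 kernel), and global context emerges only after stacking many layers; in the sketch above, every patch already interacts with every other patch.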

    By applying this property of ViT to FraD, we expect it to uncover features on fracture surfaces that the previous CNN-based FraD AI overlooked. This should improve the performance of fracture-surface image classification and pave the way for additional functionality in the future.

    We will continue to keep pace with leading-edge technologies and incorporate them into our development, with a focus on improving the performance of FraD.

    If you have any questions about the results obtained using FraD, please feel free to contact us. We greatly appreciate your continued support.