Introduction
There are various methods available for measuring body fat percentage, though many require specialized equipment or are cumbersome to perform.
Traditional methods such as bioelectrical impedance analysis (BIA), dual-energy X-ray absorptiometry (DEXA), and skinfold calipers provide accurate results but are not always easily accessible to the general public.
While it is possible to estimate body fat visually, such assessments are inherently subjective and lack the precision needed for tracking fitness progress quantitatively.
At SENTIF, we have developed a novel AI-based system to address these limitations by using deep learning to provide a more convenient, objective, and quantifiable method for measuring body fat percentage from images.
The aim is to simplify the process of tracking fitness and optimize workout efficiency through accurate and easy-to-obtain body composition data.
Methodology
Our model was trained on more than 3,000 images of male upper bodies and tested on a held-out set of 726 images,
each paired with a ground truth (GT) body fat measurement obtained from well-established methods including BIA, DEXA scans, and skinfold calipers.
The dataset was divided into body fat percentage groups, and a multi-class classification model was trained to estimate body fat.
The AI model classified the images into discrete body fat percentage classes, then applied a probabilistic weighting to each class.
The final body fat percentage was computed as the probability-weighted sum of each class's representative body fat value (i.e., the expected value over the classes), allowing the model to generate a continuous estimate rather than a discrete label.
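The expected-value step can be sketched as follows. Note that the class bin midpoints below are illustrative assumptions; the paper does not specify the actual class boundaries used by the SENTIF model.

```python
import numpy as np

# Hypothetical class definitions: each class covers a body fat range and
# is represented here by its midpoint (illustrative bins, not the
# actual bins used by the model described above).
class_midpoints = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 28.0])

def continuous_estimate(class_probabilities: np.ndarray) -> float:
    """Collapse softmax class probabilities into a single continuous
    body fat percentage via the probability-weighted sum (expected value)."""
    return float(np.dot(class_probabilities, class_midpoints))

# Example: a prediction concentrated around the 16% and 20% classes.
probs = np.array([0.02, 0.08, 0.45, 0.35, 0.08, 0.02])
estimate = continuous_estimate(probs)  # approximately 17.8
```

Because the output is a weighted average of class values rather than a single argmax class, the model can express estimates that fall between class boundaries.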
To evaluate the model's accuracy, we compared the estimated body fat percentage to the ground truth measurements, calculating the Mean Absolute Error (MAE).
The MAE between the AI-generated estimates and the actual measurements was 1.326, meaning that the AI's predictions differ from the ground truth (GT) by approximately 1.326 percentage points on average, indicating a high level of accuracy in predicting body fat percentage from images.
This relatively low error rate underscores the model's robustness when compared to traditional methods.
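The evaluation metric itself is straightforward to compute; a minimal sketch (with made-up values, not the actual evaluation data):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between predicted and ground truth
    body fat percentages, in percentage points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true)))

# Illustrative GT and estimated values only.
gt  = [15.2, 22.4, 9.8, 18.1]
est = [14.0, 23.0, 11.0, 18.5]
mae = mean_absolute_error(gt, est)  # (1.2 + 0.6 + 1.2 + 0.4) / 4 = 0.85
```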
Results and Discussion
As shown in the comparison graph between the ground truth and estimated values, our model demonstrated high accuracy across the majority of the dataset.
However, certain outliers were observed.
To investigate these discrepancies, we conducted a detailed analysis using GradCAM to generate heatmaps that revealed which parts of the images the AI relied on to make its predictions.
The heatmaps (excluding areas marked in blue and yellow) indicated that the model sometimes struggled with images containing visual noise, such as hands or arms in non-standard positions, excessive body hair, tattoos, or sagging skin resulting from rapid weight loss.
These factors diverted the AI's attention away from the relevant areas of the body, reducing the accuracy of the body fat percentage estimates.
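For readers unfamiliar with the technique, a minimal Grad-CAM sketch in PyTorch is shown below. The model architecture, target layer, and class index here are assumptions for illustration, not the actual SENTIF implementation: Grad-CAM weights a convolutional layer's activations by the gradient of the chosen class score, producing a heatmap of the image regions that drove the prediction.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Minimal Grad-CAM: capture the target layer's activations and the
    gradient of the class score w.r.t. them, weight channels by the
    spatially averaged gradients, then ReLU and normalize to [0, 1]."""
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image)[0, class_idx]  # image shape: (1, C, H, W)
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Global-average-pool the gradients to get per-channel weights.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]  # (H, W) heatmap in [0, 1]
```

Overlaying such a heatmap on the input image makes it visible when attention drifts onto hands, tattoos, or other noise instead of the torso.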
Despite these occasional outliers, the model's overall performance was consistent and reliable, with minimal deviation from the ground truth values.
Conclusion
The results of our AI-based body fat percentage estimation model demonstrate its potential as a convenient tool for tracking body fat with reasonable accuracy, comparable to traditional methods.
Given the low margin of error relative to established body fat measurement techniques, this AI solution can serve as a practical option for individuals seeking to monitor their body composition without the need for specialized equipment.
Additionally, when images are taken under consistent lighting and environmental conditions, the model's accuracy improves, making it even more effective for tracking changes in body fat percentage over time.