One of the first tasks artificial intelligence (AI) failed at, and failed miserably, was facial recognition. It failed so badly that it spawned a significant grassroots effort to block facial recognition altogether, and IBM, an early pioneer of the technology, exited that part of the AI market.
At the core of the problem were biased data sets that performed unacceptably poorly on minorities and women.
We’ve learned from companies like NVIDIA, which aggressively use simulation to train self-driving cars and robots, that simulated training at machine speeds can significantly increase the accuracy of autonomous machines.
I recently met with a company called Datagen that uses synthetic people to build less biased facial recognition programs and, potentially, to make metaverse-based collaboration systems more effective.
Let’s explore the use of synthetic people to improve AI accuracy and to create the next generation of collaboration platforms:
AI and biased data
We now know that biased data sets lead to embarrassingly inaccurate AIs. Market researchers, who are trained to identify and eliminate bias as a matter of practice, should have been brought in to create practices that would have led to less biased data sets. With live data, it is virtually impossible to eliminate all bias without making the data sets so large that they become unmanageable.
To correct this, creating synthetic humans that highlight unique differences rather than over-emphasizing similarities becomes an interesting way to increase the accuracy of computer vision efforts. These synthetic humans can be used for both training and testing, although using the same data set for both is inadvisable: doing so would merely confirm the data set was implemented without errors, not catch errors in the data set itself.
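As a minimal sketch of that train/test discipline (Datagen’s tooling is proprietary, so the generator function, demographic group labels, and sample counts below are hypothetical stand-ins), the key idea is to draw synthetic samples evenly across demographic groups and keep the training and testing sets strictly disjoint:

```python
import random

# Hypothetical stand-in for a synthetic-data generator such as Datagen's;
# a real one would return a rendered face image plus its ground-truth labels.
def generate_synthetic_face(group: str, seed: int) -> dict:
    return {"group": group, "seed": seed, "image": f"face_{group}_{seed}.png"}

GROUPS = ["group_a", "group_b", "group_c", "group_d"]  # demographic slices
PER_GROUP = 1_000  # equal counts per group, so no slice dominates training

samples = [generate_synthetic_face(g, i) for g in GROUPS for i in range(PER_GROUP)]
random.seed(42)
random.shuffle(samples)

# Disjoint 80/20 split: testing on the training set would only confirm the
# data set was implemented correctly, not reveal errors in the data itself.
cut = int(0.8 * len(samples))
train_set, test_set = samples[:cut], samples[cut:]

train_ids = {(s["group"], s["seed"]) for s in train_set}
test_ids = {(s["group"], s["seed"]) for s in test_set}
assert train_ids.isdisjoint(test_ids)  # no leakage between the two sets
```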
You can also run the synthetic data set against real data, both to look for bias in the real data set and to catch any unintended bias in the synthetic training set. Because synthetic data carries no privacy violations, the results can also serve a variety of other functions, including broader metaverse efforts where realistic artificial people enhance the apparent reality of a simulation. Say, for instance, you wanted to showcase how light falls in the interior of a building once it is occupied. Using images of real people would create licensing and privacy issues, whereas using synthetic images derived from a variety of people should not.
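One way to structure that cross-check, sketched below under the assumption that per-sample evaluation results are already in hand (the group names, tolerance, and example numbers are illustrative, not anything Datagen has published), is to compare per-group accuracy on the real and synthetic sets and flag any group where the two diverge, or where accuracy trails the best-served group:

```python
from collections import defaultdict

def per_group_accuracy(results):
    """results: iterable of (demographic_group, prediction_was_correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / totals[g] for g in totals}

def flag_bias(real_results, synthetic_results, tolerance=0.05):
    """Flag groups whose accuracy diverges across the two data sets, or trails
    the best-served group on real data by more than `tolerance`."""
    real_acc = per_group_accuracy(real_results)
    synth_acc = per_group_accuracy(synthetic_results)
    best = max(real_acc.values())
    flags = []
    for group, acc in real_acc.items():
        if abs(acc - synth_acc.get(group, 0.0)) > tolerance:
            flags.append((group, "real vs. synthetic accuracy diverges"))
        if best - acc > tolerance:
            flags.append((group, "under-served relative to best group"))
    return flags

# Illustrative usage with fabricated evaluation results.
real = [("group_a", True)] * 95 + [("group_a", False)] * 5 \
     + [("group_b", True)] * 70 + [("group_b", False)] * 30
synthetic = [("group_a", True)] * 93 + [("group_a", False)] * 7 \
          + [("group_b", True)] * 92 + [("group_b", False)] * 8
print(flag_bias(real, synthetic))  # group_b is flagged on both checks
```

A divergence on the first check points at a mismatch between the two data sets; the second check catches the classic failure mode where one demographic group is simply served worse than the rest.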
It’s not just people
Synthetic data doesn’t have to apply just to people, either.
It can be used in store security systems to identify shoplifting or help with automated checkout, improve hand tracking for virtual reality (VR) solutions, simulate how planned buildings will be used so inefficiencies can be removed long before construction starts, and improve body-tracking accuracy for everything from protecting drivers to improving marketing programs.
It would also be very handy for home security, identifying packages and providing better alerts about porch pirates. It can even help with facial reconstruction after an accident. But one of the most interesting applications is in collaboration products.
Collaboration improvements
Meta is aggressively pursuing metaverse-based collaboration in which you are represented by an avatar. Avatars, though, can look more like cartoons than people. You can’t use a live video image of someone because, in this implementation, most participants are wearing VR headsets, which are off-putting to everyone else in the conversation. What you need is a level of accuracy closer to a deepfake’s, where you look like you and your body and facial expressions appear realistically on your avatar.
Datagen demonstrated a far more realistic avatar technology using its computer vision algorithms coupled with eye, face, and body tracking, with a particular focus on hand tracking.
With Datagen’s technology, you shouldn’t need a controller, because your hand is your controller. And instead of floating around legless, your entire body is rendered in a more photorealistic way. While Datagen’s current capability is far better than some alternatives, it is, in my opinion, still on the wrong side of the uncanny valley. But it should improve sharply over time, to the point where you can’t tell the difference between an avatar and a real person. That would let you freeze your digital appearance at your favorite age, attend meetings in your pajamas if you want, and still look professionally dressed on a remote video call.
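Datagen hasn’t published how its tracking pipeline works, but to make the "your hand is your controller" idea concrete, here is a minimal sketch using Google’s open-source MediaPipe library (a stand-in, not Datagen’s technology) to pull per-frame hand landmarks from a webcam, which is the kind of raw signal an avatar rig would consume:

```python
import cv2
import mediapipe as mp

# MediaPipe's hand tracker returns 21 3-D landmarks per detected hand;
# an avatar system would map these onto the skeleton of a rendered hand.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)

capture = cv2.VideoCapture(0)  # default webcam
for _ in range(300):  # roughly 10 seconds at 30 fps, just for the demo
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames as BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # landmark 8 is the index fingertip
            print(f"index fingertip at x={tip.x:.2f}, y={tip.y:.2f}, z={tip.z:.2f}")
capture.release()
hands.close()
```

Each landmark arrives as normalized (x, y, z) coordinates per frame, so pinches, points, and grabs can be recognized directly from the geometry, with no physical controller in the loop.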
Wrapping up
Turning a video feed into actionable data that can be accurately interpreted by an AI is critical to the advancement of everything from security reporting and access technology, including facial biometrics, to autonomous machines.
Our future automation efforts will depend on getting this right and on correcting the current lack of trust in facial recognition solutions. Datagen has a set of tools that could massively increase this accuracy and benefit efforts including far more viable metaverse-based collaboration and communications.
Though still young, Datagen appears to be at the forefront of substantially improving computer vision and building the tools that will help us create stronger AIs and a far more accurate metaverse.