Last week Microsoft Corp. said it would stop selling software that guesses a person’s mood by looking at their face.

The reason: It could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft’s decision to halt sales of the technology entirely is one way of dealing with the problem.

But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased.