For some time now I’ve been thinking about a very interesting image:

[Image: A very deceptive bowl. Image credit: https://twitter.com/AkiyoshiKitaoka/status/836382313160171521]

At first glance it appears somewhat “normal”.1 Some red strawberries in a bowl, with a cyan cast over the scene, right? Well, yes and no. That’s certainly what we see. But there’s a big problem: there are no reddish pixels anywhere in the image! The pixels making up the seemingly red strawberries are actually neutral gray! Pull the image up in the image editor of your choice and see for yourself.
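
Or, if you’d rather script the check than squint at an eyedropper, a few lines of Python with Pillow and NumPy will do it. (The filename here is just a placeholder for wherever you save the image, and depending on which copy you grab, JPEG compression may sneak in a handful of faintly reddish pixels.)

```python
from PIL import Image
import numpy as np

# Load the saved copy of the image as an RGB array of ints.
img = np.asarray(Image.open("strawberries.jpg").convert("RGB")).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# A pixel only reads as "reddish" if red dominates both other channels.
reddish = (r > g) & (r > b)
print(f"reddish pixels: {reddish.sum()} of {reddish.size}")
```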

The phenomenon causing this is color constancy. We seem to have evolved the ability to compensate for the color of the light illuminating the objects we see (the sun, for example, is a different color at different times of day), and that compensation happens subconsciously as part of our color vision.
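
To make that concrete, here’s a minimal sketch of “gray-world” white balancing, one of the simplest computational takes on color constancy. It’s an illustrative stand-in for what our visual system does, not a model of it: assume the scene should average out to gray, and rescale the channels until it does.

```python
from PIL import Image
import numpy as np

def gray_world(img: Image.Image) -> Image.Image:
    """White-balance by assuming the average scene color should be gray."""
    arr = np.asarray(img.convert("RGB")).astype(float)
    means = arr.reshape(-1, 3).mean(axis=0)  # per-channel averages
    arr *= means.mean() / means              # rescale each channel toward a common gray
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Run on the strawberry image, this should largely undo the cyan cast and turn the gray berries visibly red, doing explicitly what your visual system was doing implicitly.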

And that got me thinking: do computer vision algorithms make the same compensation? I suppose if the training data contained the same objects under different lighting they would be “encouraged” to, but that’s not a necessity. And performing the right color compensation requires looking at the image at a global level, which is the sort of thing that’s currently difficult to train AI algorithms to do.

Anyway, I’m currently working on code to perform this and other transformations to see what other interesting visual effects can be generated. I’ll post more about it once it’s ready and my perfectionism has abated…
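
In the meantime, here’s a deliberately crude sketch of the basic move for manufacturing an image like this one: simulate a cyan illuminant, then clamp the red channel so that no pixel ends up red-dominant. (This is a simplification for illustration, not the code I’m actually polishing, which needs more care to keep the result looking like a plausible photo.)

```python
from PIL import Image
import numpy as np

def deredden(img: Image.Image, cyan_strength: float = 0.6) -> Image.Image:
    """Make an image with no red-dominant pixels that may still 'look' red."""
    arr = np.asarray(img.convert("RGB")).astype(float)
    arr[..., 0] *= cyan_strength  # dim red to simulate a cyan-tinted illuminant
    # Clamp red below the green and blue channels, so no pixel is red-dominant.
    arr[..., 0] = np.minimum(arr[..., 0], arr[..., 1:].min(axis=-1))
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```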

PS: If you’d like to learn more about the discovery of this effect and images embodying it, check out Wendy Carlos’s page on it.


  1. Assuming normative vision. If you’re colorblind, I have no idea what it looks like to you, sorry.