I’ve been interested in cephalopod vision ever since I learned that, despite their superb appreciation for chroma (as evidenced by their ability to match the color of their surroundings as well as texture and pattern), cuttlefish eyes contain only one light-sensitive pigment. Humans and other multichromatic animals perceive color as a mix of activations from receptors tuned to different wavelengths; with a single pigment, cuttlefish must have another way. So while the images coming into the brain of a cuttlefish might look something like this . . .
. . . they manage to interpret the images to precisely match their surroundings and communicate colorful displays to other cuttlefish. Some time ago, Stubbs and Stubbs put forth the possibility that they might use chromatic aberrations to interpret color (I discussed and simulated what that might look like in this post). What looks like random flickering in the gif above is actually simulated focusing across chromatic aberrations. [original video]. Contrary to what one might think, defocus and aberration in images aren’t “wrong.” If you know how to interpret them, they provide a wealth of information that might allow a cuttlefish to see the world in all its chromatic glory.
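To make the idea concrete, here is a minimal NumPy sketch of one way to fake a chromatic-aberration focal stack: each “focus setting” leaves one wavelength band sharp and defocuses the others in proportion to their spectral distance, and a single-pigment eye records only the summed luminance of each frame. The function names and the defocus model are my own illustration, not the code from this project’s repository.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel; sigma=0 means perfectly in focus (identity)."""
    if sigma == 0:
        return np.array([1.0])
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur2d(img, sigma):
    """Separable Gaussian blur with edge padding (rows, then columns)."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid"),
            axis, img)
    return img

def cuttle_stack(rgb, defocus=1.8):
    """Turn an RGB image into a monochrome 'focal stack'.

    For each of three focus settings, one color channel stays sharp
    and the other two are blurred in proportion to their channel
    distance -- a crude stand-in for longitudinal chromatic aberration.
    Each frame is then collapsed to a single luminance channel, as a
    one-pigment eye would record it.
    """
    frames = []
    for in_focus in range(3):
        chans = [blur2d(rgb[..., c].astype(float), defocus * abs(c - in_focus))
                 for c in range(3)]
        frames.append(np.mean(chans, axis=0))
    return np.stack(frames)
```

Note that a pure red pixel stays sharp in the red-focused frame but smears out in the blue-focused one, so color information survives as a pattern of sharpness across the stack.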
Top: learned color image based on the chromatic aberration stack. Middle: neural network color reconstitution. Bottom: ground truth color image.
We shouldn’t expect the cuttlefish to experience their world in fuzzy grayscale any more than we should expect humans to perceive their world in an animal version of a Bayer array, each photoreceptor individually distinguished (not to mention distracting saccades, blind spot at the optic nerve, vasculature shadowing, etc.). Instead, just like us humans, they would learn to perceive the visual data produced by their optical system in whatever way makes the most sense and is most useful.
I piped simulated cuttlefish vision images into a convolutional neural network, with corresponding color images as the reference. The cuttle-vision images flow through the 7-layer network and are compared to the RGB targets on the other side. I started by building a dataset of simulated images consisting of randomly placed, pixel-sized colored dots. This was supposed to be the easy “toy example” before moving on to real images.
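A dataset along those lines is easy to generate. The sketch below is my own reconstruction of the described toy dataset (names and parameters are illustrative, not taken from the repository); the paired network input would be the simulated aberration stack of each target image.

```python
import numpy as np

def make_dot_image(size=32, n_dots=10, rng=None):
    """Random RGB target: n_dots single-pixel colored dots on black."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((size, size, 3))
    ys = rng.integers(0, size, n_dots)
    xs = rng.integers(0, size, n_dots)
    img[ys, xs] = rng.random((n_dots, 3))  # random colors in [0, 1)
    return img

def make_dataset(n_images=64, size=32, n_dots=10, seed=0):
    """Stack of target images; shape (n_images, size, size, 3)."""
    rng = np.random.default_rng(seed)
    return np.stack([make_dot_image(size, n_dots, rng)
                     for _ in range(n_images)])
```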
Left: training input. Middle: network’s attempt at reconstitution. Right: target. For pixel-sized color features, the convolutional kernels of the network learn to blur the target pixels into ring shapes.
Bizarrely, the network learned to interpret these images as colored donuts, usually centered around the correct location but incapable of reconstituting the original layout. Contrary to what you might expect, the simple dataset performed poorly even with many training examples, and color image reconstitution improved dramatically when I switched to real images. Training on a selection of landscape images looks something like this:
Top: chromatic aberration training images (stacked as a color image for viewing). Center: Ceph-O-Vision color perception. Bottom: ground truth RGB.
As we saw in the first example, reconstituting sparse single pixels from chromatic aberration images trains very poorly. However, the network was able to learn random patterns of larger features (offering better local context) much more effectively:
Interestingly enough, the network learns to be most sensitive to edges. You can see in the training gif above that after 1024 epochs of training, the network mainly reconstitutes pattern edges. It never learns to exactly replicate the RGB pattern, but gets pretty close. It would be interesting to use a network like this to predict what sort of optical illusions a cuttlefish might be susceptible to. This could provide a way to test the chromatic aberration hypothesis in cephalopod vision. Wikipedia image by Hans Hillewaert used as a mask for randomly generated color patterns.
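One simple way to produce random color patterns with larger features is to draw a coarse grid of random colors and upsample it with nearest-neighbor repetition, so every feature spans many pixels and carries the local context the single-dot dataset lacked. This is my own illustrative construction, not the masking code from the repository:

```python
import numpy as np

def make_blob_pattern(size=32, cell=8, seed=None):
    """Random color pattern with features roughly cell pixels across.

    Draws a coarse grid of random colors and upsamples it with
    nearest-neighbor repetition (np.kron with a block of ones), so
    each colored feature spans a cell x cell region instead of a
    single pixel.
    """
    rng = np.random.default_rng(seed)
    coarse = rng.random((size // cell, size // cell, 3))
    # kron with a cell x cell block of ones repeats each coarse pixel
    return np.kron(coarse, np.ones((cell, cell, 1)))
```

In the post's setup, a pattern like this (optionally masked by the cuttlefish silhouette) would be the RGB target, and its simulated aberration stack the network input.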
Finally, I trained the network on some footage of a hunting cuttlefish (CC BY-SA John Turnbull). Training on the full video, here’s what a single frame looks like as the network updates over about a thousand training epochs:
This project is far from a finished piece, but it’s already vastly improved my intuition for how convolutional neural networks interpret images. It also provides an interesting starting point for thinking about how cuttlefish visually communicate and perceive. If you want more of the technical and unpolished details, you can follow this project’s GitHub repository. I have a lot of ideas on what to try next: naturally, some control training with a round pupil (and thus less chromatic aberration), but also comparing the simple network I’ve built so far to the neuroanatomy of cephalopods and implementing a “smart camera” version that learns in real time. If you found this project interesting, or have your own cool ideas mixing CNNs and animal vision, be sure to let me know @theScinder or in the comments.