What can we learn from a fish brain?

Questions related to memory, consciousness, or attention always provoke my thoughts. When I solve a problem, I always start small, so why not study the brain on a smaller scale? I joined Dr. Julie Semmelhack's lab in March 2018, where we use a baby fish brain as our model system. Its transparency offers a huge advantage: it allows us to express a calcium sensor in the brain, shine light on it, and detect the signal to see what is going on across the whole brain at cellular resolution, all while the animal is behaving. I am working on four different questions (attention, form from motion, visual pop-out, and the prey-capture process); let me start with attention.

(An important note, mainly for readers not familiar with this field: the system sounds simple, even perfect, the way I described it, but there are certainly issues and assumptions in these experiments that we need to be aware of. For example, the calcium sensor may affect the animal; calcium imaging is not a direct measure of brain activity, since the cells in the brain communicate through so-called "spikes" or electrical signals; and you need to balance temporal and spatial resolution when imaging the brain to find the right combination for your question, and so on. The point is that there are issues and assumptions which we scientists, just as in other model systems and fields, should always keep in mind when we run our experiments and interpret our data. Anyway, these are just my two cents; let me continue the story of my journey.)

The specific enigmatic question I began seeking an answer to in this fantastic system is: how does the larval zebrafish brain allocate and deploy resources when there are multiple competing prey in its visual environment? For now, I will call this selective attention. There is an excellent review paper about it by Eric Knudsen from Stanford University.

The first task was to find a behavioral paradigm to serve as my foundation for tackling selective attention in the fish context. Like any other animal, to survive a fish must eat, and so it hunts. Thus, I used the prey-capture behavior as my starting point. A number of interesting studies had already described this behavior (see Bianco et al. (2011), Semmelhack et al. (2014), Bianco et al. (2015), and Mearns et al. (2019)).



(Figure 1) A description of the experimental setup: (a, b) The fish is embedded in agarose, with its tail and eyes left free. We use a commercially available projector to present a visual stimulus in front of the fish, then record and track the fish's eye and tail kinematics with a high-speed camera system. Since we cannot directly ask the fish what it is doing, we infer its intentions from those kinematics.
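To make the kinematic inference concrete, here is a minimal sketch of how one might flag hunting initiation from tracked eye angles. Zebrafish converge both eyes toward a target when hunting; the 50-degree vergence threshold and the function names here are my own illustrative assumptions, not the lab's actual pipeline.

```python
# Illustrative sketch: inferring hunting initiation from tracked eye angles.
# The vergence threshold (50 degrees) is an assumed value for illustration.

def eye_vergence(left_angle_deg, right_angle_deg):
    """Vergence = summed nasal rotation of the two eyes, in degrees."""
    return left_angle_deg + right_angle_deg

def hunting_frames(left_angles, right_angles, threshold_deg=50.0):
    """Return the frame indices where the eyes are converged past threshold."""
    return [i for i, (l, r) in enumerate(zip(left_angles, right_angles))
            if eye_vergence(l, r) > threshold_deg]

# Toy trace: the eyes converge on frames 2 and 3.
left = [10, 15, 30, 32, 12]
right = [12, 14, 28, 30, 10]
print(hunting_frames(left, right))  # -> [2, 3]
```

In practice the angles would come from the high-speed camera tracking, and the threshold would be calibrated per fish, but the logic of turning raw kinematics into a behavioral event is the same.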

The basic idea is that when we project a single moving dot in front of the fish, it performs its usual hunting routine (the eyes converge, and there is a characteristic bending of the tail); in other words, the fish takes the dot for prey and tries to pursue it (Fig. 1a, 1b). Instead of presenting just one dot, I simply add another, so there are two "prey" in front of the fish, moving symmetrically in its visual space. Isn't that a straightforward idea? To illustrate what a trial looks like, watch Fig. 2a.


(Figure 2) Double-prey assay: (a) A behavior recording and the stimuli used in the assay. (b) The probability of the fish selecting the 2° prey versus different sizes of competing prey. The probability is higher when the competing prey is larger (less salient as prey), and lower when it is smaller (comparable saliency).

In this double-prey assay, I fix the size of one prey (2°) and compare the fish's selection against competing prey of different sizes (3°, 4°, 6°). When the competitor is 3°, the fish gets more "confused" because the competitor has comparable saliency. Their natural prey are usually small (e.g., paramecia), which is perhaps why the smaller the dot in this experiment, the higher the probability the fish will select it over the larger one. In this context, the fish selects only one prey at a time, so it must "ignore" the competing prey in order to focus on its target during the hunting sequence. How does it do that? My simple hypothesis is that the fish must encode the saliencies of the two stimuli and compare them through some sort of global feedback inhibition and focal enhancement. I was inspired by studies in birds, which showed this phenomenon, specifically in the optic tectum and the nucleus isthmi (the superior colliculus and parabigeminal nucleus in mammals, respectively); however, those experiments were done in anesthetized animals with recordings from only a few neurons (you can find more in the review I mentioned). I investigated this in a behaving animal while recording from a large neuronal population.
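The selection probability in Fig. 2b is, at its core, a per-condition fraction of trials. Here is a small sketch of that computation, using fabricated trial tuples purely for illustration (the data format and numbers are assumptions, not the actual dataset):

```python
# Sketch: probability of selecting the fixed 2-degree prey, grouped by the
# competing prey's size. Trial data below are fabricated for illustration.

from collections import defaultdict

def selection_probability(trials):
    """trials: list of (competitor_size_deg, chose_2deg_prey: bool) tuples."""
    counts = defaultdict(lambda: [0, 0])  # size -> [times chosen, total trials]
    for size, chose in trials:
        counts[size][0] += int(chose)
        counts[size][1] += 1
    return {size: chosen / total for size, (chosen, total) in counts.items()}

# Made-up example: the 2-degree prey is picked rarely against a 3-degree
# competitor (comparable saliency) and often against a 6-degree one.
trials = [(3, True), (3, False), (3, False), (3, False),
          (6, True), (6, True), (6, True), (6, False)]
print(selection_probability(trials))  # -> {3: 0.25, 6: 0.75}
```

A real analysis would also need per-fish grouping and confidence intervals, but the trend in the figure is this fraction plotted against competitor size.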

Here are examples of the functional imaging from these experiments. The yellow circle represents the prey-capture initiation and the prey location (right or left).

(Based on my experience in this field, I think building the behavioral paradigm is crucial for capturing the dynamics of the specific neural circuits underlying the behavior; that is why I developed real-time analysis of fish kinematics to make a closed-loop feedback system for virtual-reality paradigms. In addition, I used my knowledge of microcontrollers and robotics to build systematic, low-cost behavior setups, and I developed customized offline computational analyses of the fish's behavior, its brain activity, and the behavior of its prey.)
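To illustrate the closed-loop idea in the abstract, here is a toy skeleton of the feedback structure: tracked kinematics update the virtual stimulus on every frame. The update rule, gain, and function names are hypothetical stand-ins; a real system would read camera frames and drive the projector at high speed.

```python
# Toy closed-loop sketch: a virtual prey's azimuth is updated from the fish's
# tracked tail angle each frame. The linear rule and gain are assumptions made
# only to show the control-loop structure, not the lab's actual feedback law.

def closed_loop_step(tail_angle_deg, prey_x_deg, gain=0.5):
    """Shift the virtual prey opposite the fish's turn to simulate pursuit."""
    return prey_x_deg - gain * tail_angle_deg

def run_virtual_trial(tail_angles, prey_x_deg=10.0):
    """Replay a sequence of tracked tail angles through the feedback rule."""
    positions = [prey_x_deg]
    for angle in tail_angles:
        prey_x_deg = closed_loop_step(angle, prey_x_deg)
        positions.append(prey_x_deg)
    return positions

# The prey starts at 10 degrees; as the fish turns toward it, it drifts closer.
print(run_virtual_trial([0.0, 4.0, 8.0]))  # -> [10.0, 10.0, 8.0, 4.0]
```

The point of closing the loop is that the stimulus becomes contingent on the animal's own movements, which is what makes a virtual-reality paradigm feel like a real pursuit to the fish.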