A typical scene contains many different objects that compete for neural representation because of the limited processing capacity of the visual system. At the neural level, competition among multiple objects is evidenced by the mutual suppression of their visually evoked responses. This competition can be biased both by bottom-up, sensory-driven mechanisms, such as stimulus salience (exogenous attention), and by top-down, goal-directed influences, such as selective, endogenous attention. Although the competition among multiple objects for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. During the past year, we have conducted or initiated four separate studies.

In the first study, fMRI data were acquired to explore the neural substrate of fine-grained awareness content and to isolate it from well-known visual and attention networks. The results showed that fronto-parietal and early visual areas in the human brain displayed a dichotomous pattern of activation, reflecting an ignition when participants became able to identify an image. A more gradual build-up of activation, in parallel with the gradual identification of the images presented, was present in both the fronto-parietal network and along the ventral visual pathway. Finally, the temporo-parietal junction appeared to be more active when participants were more confident of their choice (i.e., in identifying an item or in not detecting one). The results of this study help to clarify discrepancies in the literature by revealing the neural circuits involved in visual attention, awareness, and certainty evaluation.

In the second study, we explored how one attends to another's actions.
Using psychophysical techniques and extensive machine-learning analysis of videos of human reaching actions, we showed that humans are able to predict the actions of others from subtle preparatory movements of the body. Participants were shown videos of a reaching task taken from pairs of subjects playing a competitive or cooperative game. Videos were cut at various time points, and participants were asked to predict the goal of the reach from the early movements in each trial. Results showed that humans can read the goal of an action from subtle preparatory movements present early in the movement. Similar results were obtained using a machine-learning analysis. The preparatory movements showed more variability in cooperative than in competitive contexts. However, even in the competitive context, in which participants had an incentive to conceal information from their opponent, there were ample cues that betrayed their goals. These results suggest that humans may have a biomechanical model of body movements that helps them determine the future course of others' actions. These findings deepen our understanding of human action prediction and inform future neuroscience and neural-modeling research aimed at understanding its neural underpinnings.

In the third study, magnetoencephalography (MEG) was used to explore the neural mechanisms for attention to conscious thoughts in two modalities, visual and verbal. The results showed that we can read from the MEG signal not only what participants are thinking about (e.g., a cow or a bicycle) but also how they are thinking about it (e.g., using words or picturing scenes in their mind). This ability to decode the thinking modality was also present when participants could think of anything they wanted. These results will contribute to a better understanding of mental states and might be particularly useful for non-communicative patients, including those with locked-in syndrome.
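The report does not specify which classifier was used in the machine-learning analysis of the reaching videos. As a minimal illustrative sketch only, with synthetic data and hypothetical feature names standing in for kinematic features extracted from the video frames before each cut point, a leave-one-out decoding analysis of reach goal might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-trial kinematic features (e.g., wrist
# velocity, trunk lean) measured up to the video cut point.
# Dimensions and effect sizes are hypothetical, not from the study.
n_trials, n_features = 80, 6
goal = rng.integers(0, 2, n_trials)        # 0 = one target, 1 = the other
X = rng.normal(0.0, 1.0, (n_trials, n_features))
X[goal == 1, :2] += 0.9                    # subtle preparatory cue in 2 features

def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    idx = np.arange(len(y))
    correct = 0
    for i in idx:
        mask = idx != i                    # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += int(pred == y[i])
    return correct / len(y)

acc = nearest_centroid_loo(X, goal)
print(f"decoding accuracy: {acc:.2f}")     # expected to exceed 0.5 chance
```

Above-chance accuracy at early cut points would indicate, as in the study, that the goal is already readable from preparatory movements; in practice a cross-validated classifier would be run separately at each cut time to trace when the information becomes available.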
Finally, we have initiated a new project to explore the neural substrates mediating statistical learning, the process by which humans and animals attend to and extract the statistical regularities in their environment. Thus far, behavioral and fMRI data have been obtained from human participants while they viewed and classified stimuli as belonging to one of two categories (e.g., animate and inanimate). The stimuli were grouped into different sets of either patterned or random image sequences. Preliminary analysis has revealed voxels with greater activation for random sequences relative to patterned sequences throughout visual cortex, extending from early visual areas to the ventral temporal lobe. These results suggest that visual statistical learning of patterns follows predictive-coding models, in which responses to predicted patterns are attenuated in visual cortex. To deepen our understanding of visual statistical learning and of how different brain regions communicate with one another, in future work we will conduct simultaneous electrophysiological recordings in multiple brain regions in non-human primates.
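The report describes the stimulus sets only as patterned or random image sequences. A common way to build such a design, shown here as a hedged sketch under the assumption of a fixed-triplet structure (the pool size and triplet grouping below are hypothetical, not taken from the study), is to embed predictable transitions in the patterned condition while matching item frequency and sequence length in the random condition:

```python
import random

random.seed(1)

# Hypothetical stimulus pool: 12 images grouped into 4 fixed triplets for
# the patterned condition; the random condition reuses the same images.
images = [f"img{i:02d}" for i in range(12)]
triplets = [images[i:i + 3] for i in range(0, 12, 3)]

def patterned_sequence(n_reps):
    """Concatenate triplets in random order; within-triplet order is fixed,
    so transitions inside a triplet are fully predictable."""
    seq = []
    for _ in range(n_reps):
        for t in random.sample(triplets, len(triplets)):
            seq.extend(t)
    return seq

def random_sequence(n_reps):
    """Same images and length, but with no predictive structure."""
    seq = []
    for _ in range(n_reps):
        seq.extend(random.sample(images, len(images)))
    return seq

pat = patterned_sequence(5)
rnd = random_sequence(5)
print(len(pat), len(rnd))  # both sequences contain 60 items
```

Under a predictive-coding account, the predictable within-triplet transitions in the patterned sequences are what should drive the attenuated visual-cortex responses observed relative to the random sequences.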