Our ability to recognize sounds and identify their sources varies with the content of the sound. Sound waves produced by speech, music, and natural events are processed according to their frequency, magnitude, and phase characteristics when transmitted to the brain, and are routed to regions that support functions such as motor control, attention, and decision making. Depending on the type and content of the information acquired, sound perception through different channels may rely on either the semantic or the physical characteristics of the sound. In this study, we examine the mechanism by which event-related sounds encountered in daily life guide our movements through passive attention processes.
We often identify sounds heard at certain signal-to-noise ratios based on previous experience; however, when some characteristics of a sound are obscured, identification may become impossible. In this study, we examine how the perception of sound events with a cause-effect relationship is processed, depending on the temporal sequence and the type of event. When the temporal order of a sound is distorted to different degrees while consecutive information is preserved, local or global distortion may occur. Our findings may shed light on how natural sounds are interpreted and could help in developing cognitive computational models for robotic applications and hearing implants.
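The local versus global temporal-order distortion mentioned above can be sketched as segment-wise scrambling of a discretized sound signal. The following is a minimal, hypothetical illustration (not the study's actual stimulus-generation procedure); the window length, the toy signal, and the two scrambling variants are illustrative assumptions.

```python
import random

def local_scramble(signal, win, rng):
    """Shuffle samples within each window of length `win`;
    the order of the windows themselves is preserved, so the
    global structure of the signal survives (local distortion)."""
    out = []
    for i in range(0, len(signal), win):
        seg = list(signal[i:i + win])
        rng.shuffle(seg)  # reorder only within this window
        out.extend(seg)
    return out

def global_scramble(signal, win, rng):
    """Keep each window of length `win` intact but shuffle the
    order of the windows, so local (consecutive) information is
    maintained while the global event order is broken."""
    segs = [list(signal[i:i + win]) for i in range(0, len(signal), win)]
    rng.shuffle(segs)  # reorder whole windows
    return [s for seg in segs for s in seg]

# Toy "signal": 12 samples standing in for a short sound event sequence.
x = list(range(12))
locally_distorted = local_scramble(x, win=3, rng=random.Random(0))
globally_distorted = global_scramble(x, win=3, rng=random.Random(0))
```

Varying `win` controls the scale of the distortion: small windows disturb only fine temporal structure, while large windows reorder whole sound events.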