In complex room settings, machine listening systems may suffer degraded performance due to factors such as room reverberation, background noise, and interfering sounds. Similarly, machine vision systems can be hampered by visual occlusion, poor lighting, and background clutter. Combining audio and visual data has the potential to overcome these limitations and enhance machine perception in complex audio-visual environments. In this talk, we will first discuss the machine cocktail party problem and the development of speech source separation algorithms for extracting individual speech sources from sound mixtures. We will then discuss selected works on audio-visual speech separation, covering the fusion of audio and visual data for speech source separation using techniques such as Gaussian mixture models, dictionary learning, and deep learning.
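To make the dictionary-learning approach concrete, the sketch below shows a generic single-channel NMF baseline, not the specific algorithms presented in the talk: each speaker's magnitude spectra are modeled with a nonnegative dictionary learned from isolated training recordings, and the mixture is separated with soft time-frequency masks. All function and variable names (fit_dictionary, separate_two_speakers, W1, W2) are illustrative assumptions, and the dictionaries are assumed to have been trained with the same STFT settings used at separation time.

```python
"""Minimal sketch of dictionary-learning-based speech separation (generic
NMF baseline, illustrative only; not the talk's specific method)."""
import numpy as np
from scipy.signal import stft, istft

def fit_dictionary(mag_spec, n_atoms, n_iter=200, eps=1e-10):
    """Learn a nonnegative dictionary W (freq x atoms) for one speaker
    via standard multiplicative NMF updates (Euclidean cost)."""
    F, T = mag_spec.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, n_atoms)) + eps
    H = rng.random((n_atoms, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ mag_spec) / (W.T @ W @ H + eps)
        W *= (mag_spec @ H.T) / (W @ H @ H.T + eps)
    # Normalize atoms to unit sum; activations absorb the scale.
    return W / (W.sum(axis=0, keepdims=True) + eps)

def separate_two_speakers(mixture, fs, W1, W2, n_iter=200, eps=1e-10):
    """Decompose the mixture spectrogram onto the concatenated fixed
    dictionaries, build Wiener-style soft masks, and invert the STFT."""
    f, t, Z = stft(mixture, fs=fs, nperseg=1024)
    V = np.abs(Z)
    W = np.hstack([W1, W2])                 # joint dictionary, held fixed
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(n_iter):                 # update activations only
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    V1 = W1 @ H[:W1.shape[1]]               # per-speaker spectral estimates
    V2 = W2 @ H[W1.shape[1]:]
    mask1 = V1 / (V1 + V2 + eps)            # soft (ratio) mask for speaker 1
    sources = []
    for mask in (mask1, 1.0 - mask1):
        _, x = istft(mask * Z, fs=fs, nperseg=1024)
        sources.append(x)
    return sources
```

Audio-visual variants of this idea condition the activations or masks on visual features such as lip movements, and the deep learning approaches mentioned above replace the linear dictionary model with a learned mask-estimation network.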