‘Virtual Acoustics’ is the field of science concerned with simulating and synthesizing sound in virtual domains. Its applications are widespread, e.g., building design, virtual entertainment and hearing research. The problem is extremely challenging because it involves simulating time-dependent wave propagation over a broad frequency spectrum in large and complex domains, ideally under real-time constraints. In our previous work, we developed a high-fidelity, massively parallel DGFEM-based acoustics simulator and a method for exploring pre-computed simulation results in an audio-visual virtual reality experience for static scenes. The ultimate goal, however, is to perform the simulations in real time, thereby allowing for interactive and dynamic scenes in VR. Our future research will explore whether physics-informed, data-driven surrogate modelling techniques can be applied to solve the problem under real-time constraints. We will pursue a combination of reduced basis techniques and efficient data-driven surrogate modelling. In such a setup, one leverages the high computational efficiency of the reduced basis model to create a large labelled data set, which serves to train the surrogate model, based on Gaussian Process Regression or, alternatively, a feed-forward neural network, in a simple supervised learning approach. Our hope is that evaluating such surrogate models will be efficient enough to deliver the final acceleration needed to reach real-time or near-real-time performance.
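The offline/online split described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the `reduced_basis_model` below is a hypothetical scalar toy stand-in for a real reduced-basis acoustics solver, the parameter is one-dimensional, and the Gaussian Process Regression surrogate is a bare NumPy implementation with a fixed RBF kernel (no hyperparameter tuning, noise-free interpolation).

```python
import numpy as np

# Hypothetical stand-in for a reduced-basis acoustics model: maps a
# scene parameter (e.g. a source coordinate) to a scalar response.
def reduced_basis_model(x):
    return np.sin(3.0 * x) + 0.5 * x

# Offline stage 1: use the (comparatively cheap) reduced-basis model
# to generate a labelled training set.
X_train = np.linspace(0.0, 2.0, 20)[:, None]
y_train = reduced_basis_model(X_train[:, 0])

# Offline stage 2: fit a GPR surrogate with a squared-exponential kernel.
def rbf_kernel(A, B, length_scale=0.3):
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

jitter = 1e-8  # small diagonal term for numerical stability
K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)  # factorized once, offline

# Online stage: evaluating the surrogate reduces to one kernel
# evaluation and a matrix-vector product -- this is the cheap step
# that a real-time audio loop would call.
def surrogate(x_new):
    return rbf_kernel(x_new, X_train) @ alpha

X_test = np.array([[0.7], [1.4]])
error = np.max(np.abs(surrogate(X_test) - reduced_basis_model(X_test[:, 0])))
print(error)  # small interpolation error at interior test points
```

The same structure carries over to the neural-network variant: only the offline fitting step changes, while the online evaluation remains a handful of dense linear-algebra operations.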