How to identify the informative dimensions of large-scale neural data is an open research problem. Neural activity carries information across both time (temporal variations in neural responses) and space (differences in the activity of different neurons or brain regions). Here we review a family of analytical methods, termed space-by-time tensor decompositions, which can elucidate how the spatial and temporal dimensions of neural activity interact to form robust single-trial representations of neural responses. We present a set of algorithms based on non-negative matrix factorization that implement the space-by-time tensor decomposition, and we discuss their properties and their applicability to different types of neural signals. We then propose a set of measures to assess the power of tensor decompositions and quantify their effectiveness in capturing neural information. We conclude with a demonstration of the space-by-time decomposition applied to real neural population spike train data.
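To make the idea concrete, a space-by-time decomposition factorizes each single-trial response matrix R_s (time bins x neurons) into a small set of non-negative temporal modules W, non-negative spatial modules M shared across trials, and per-trial activation coefficients H_s, so that R_s ≈ W H_s M. The NumPy sketch below illustrates one way to fit such a model with standard alternating multiplicative updates for the summed squared error; it is a minimal illustrative sketch, not the exact published algorithm, and the function name and parameter names (`space_by_time_nmf`, `P`, `Q`) are our own.

```python
import numpy as np

def space_by_time_nmf(R, P, Q, n_iter=300, eps=1e-9, seed=0):
    """Approximate each trial R[s] (time x neurons) as W @ H[s] @ M.

    W : (time, P)      non-negative temporal modules, shared across trials
    M : (Q, neurons)   non-negative spatial modules, shared across trials
    H : (trials, P, Q) non-negative per-trial activation coefficients

    Alternating multiplicative updates decrease the summed squared
    reconstruction error while preserving non-negativity.
    (Illustrative sketch, not the exact published algorithm.)
    """
    S, T, N = R.shape
    rng = np.random.default_rng(seed)
    W = rng.random((T, P))
    M = rng.random((Q, N))
    H = rng.random((S, P, Q))
    for _ in range(n_iter):
        # Update temporal modules W, holding H and M fixed.
        HM = H @ M                                        # (S, P, N)
        W *= np.einsum('stn,spn->tp', R, HM) / (
             W @ np.einsum('spn,sqn->pq', HM, HM) + eps)
        # Update spatial modules M, holding W and H fixed.
        WH = W @ H                                        # (S, T, Q)
        M *= np.einsum('stq,stn->qn', WH, R) / (
             np.einsum('stq,stu->qu', WH, WH) @ M + eps)
        # Update per-trial coefficients H, holding W and M fixed.
        H *= np.einsum('tp,stn,qn->spq', W, R, M) / (
             (W.T @ W) @ H @ (M @ M.T) + eps)
    return W, H, M

# Usage on synthetic data generated from known non-negative factors.
rng = np.random.default_rng(1)
R = rng.random((50, 3)) @ rng.random((10, 3, 2)) @ rng.random((2, 20))
W, H, M = space_by_time_nmf(R, P=3, Q=2)
rel_err = np.linalg.norm(R - W @ H @ M) / np.linalg.norm(R)
```

Sharing W and M across trials while letting only the low-dimensional coefficients H_s vary is what makes the factorization a compact single-trial representation: the modules capture the repeatable spatial and temporal structure, and H_s carries the trial-specific information.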