Privacy has become a major issue in distributed learning. Current approaches that avoid a trusted external authority either reduce the accuracy of the learning algorithm (e.g., by adding noise) or incur a high performance penalty. We propose a methodology for private distributed ML from lightweight cryptography (in short, PD-ML-Lite). We apply our methodology to two major ML algorithms, namely non-negative matrix factorization (NMF) and singular value decomposition (SVD). Our protocols are communication-optimal, achieve the same accuracy as their non-private counterparts, and satisfy an intuitive and measurable notion of privacy that we define. We use lightweight cryptographic tools (multi-party secure sum and normed secure sum) to build learning algorithms, rather than wrapping complex learning algorithms in a heavy multi-party computation (MPC) framework. We showcase our algorithms' utility and privacy for NMF on topic modeling and recommender systems, and for SVD on principal component regression and low-rank approximation.
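To illustrate the kind of lightweight primitive the abstract refers to, the following is a minimal sketch of a multi-party secure sum based on additive secret sharing. This is an illustrative textbook construction, not the paper's protocol; the modulus and function names are assumptions chosen for the example.

```python
import secrets

MOD = 2**61 - 1  # illustrative modulus (assumption, not from the paper)

def share_value(value, n_parties, mod=MOD):
    """Split `value` into n_parties additive shares that sum to value mod `mod`."""
    shares = [secrets.randbelow(mod) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % mod)
    return shares

def secure_sum(private_values, mod=MOD):
    """Each party splits its input into shares; any single share (or any
    incomplete set of shares) is uniformly random and reveals nothing.
    Combining all shares reveals only the total."""
    n = len(private_values)
    # all_shares[i][j] = share that party i would send to party j
    all_shares = [share_value(v, n, mod) for v in private_values]
    # each party j locally sums the shares it received ...
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % mod
                    for j in range(n)]
    # ... and the partial sums are combined to recover the total
    return sum(partial_sums) % mod

print(secure_sum([10, 20, 30]))  # → 60, without exposing any single input
```

The key point, which the abstract's contrast with heavy MPC frameworks relies on, is that each party performs only cheap modular additions and sends one share per peer, so the primitive composes with iterative learning updates at low cost.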