We present a two-stage neural optimization approach as an alternative to conventional and randomized SVD techniques, in which the memory requirement depends explicitly on the feature dimension and the desired rank, independent of the sample size. The network minimization problem converges to a low-rank approximation with high precision. Our architecture is fully interpretable: every network output and weight has a specific meaning.
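As a minimal illustration of the underlying idea (not the paper's architecture), a rank-r approximation of a matrix A can be obtained by gradient descent on two factors U and V, whose memory footprint depends only on the matrix dimensions and the target rank, not on any stored sample set:

```python
import numpy as np

# Hedged sketch: low-rank approximation A ≈ U @ V.T by gradient descent
# on the factors. U is (m x r) and V is (n x r), so memory scales with
# the dimensions and the rank r only.
rng = np.random.default_rng(0)
m, n, r = 100, 40, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exactly rank r
A = A / np.linalg.norm(A, 2)  # normalize spectral norm for a stable step size

U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
lr = 0.1
for _ in range(3000):
    E = U @ V.T - A   # residual of the current approximation
    gU = E @ V        # gradient of 0.5 * ||U V^T - A||_F^2 w.r.t. U
    gV = E.T @ U      # gradient w.r.t. V
    U -= lr * gU
    V -= lr * gV

err = np.linalg.norm(U @ V.T - A) / np.linalg.norm(A)  # relative error
```

Because A is exactly rank r here, the factored minimization drives the relative error close to zero; the paper's two-stage network refines this basic scheme.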
A low-cost, two-stage, hybrid neural Pareto optimization approach is accurate and scales with the data dimensions and the number of objective functions and constraints. In the first stage, the neural network efficiently extracts a weak Pareto front, using the Fritz-John conditions as the discriminator. The second stage is a low-cost Pareto filter that extracts the strong Pareto-optimal subset. The Fritz-John conditions also provide theoretical bounds on the approximation error.
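The second-stage filtering step can be sketched as a simple dominance check over candidate objective vectors; this is an illustrative implementation of a generic Pareto filter (for minimization), not the paper's exact algorithm:

```python
import numpy as np

def pareto_filter(F):
    """Return a boolean mask of non-dominated (strong Pareto optimal)
    rows of F, an (n_points, n_objectives) array, assuming minimization."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # point j dominates point i if F[j] <= F[i] in every objective
        # and F[j] < F[i] in at least one objective
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Toy example: three trade-off points and one dominated point.
F = np.array([[0.0, 1.0],
              [0.5, 0.5],
              [1.0, 0.0],
              [0.8, 0.9]])   # dominated by [0.5, 0.5]
mask = pareto_filter(F)
```

Each candidate is compared against the whole set, so the filter costs O(n^2 · k) for n points and k objectives, which is cheap relative to the first-stage network training.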