Universal Approximation Theorem Explained: Why Neural Networks Can Approximate Any Continuous Function
The Universal Approximation Theorem (UAT) gets quoted constantly, but it is usually described more loosely than it deserves. It does not say neural networks are magically good at every task. It does not say a shallow network is the most practical architecture. It does not say gradient descent will easily find the right weights. What it does say is still important: with a suitable nonlinear activation (the classic results cover sigmoids, and later results cover essentially any non-polynomial activation, including ReLU) and enough hidden units, a feedforward network with a single hidden layer can approximate any continuous function on a compact domain as closely as we want. More precisely: for every continuous f and every tolerance ε > 0, there exists a network g such that |f(x) − g(x)| < ε for all x in the domain. ...
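The theorem is constructive enough to demonstrate directly. A minimal sketch, using NumPy and ReLU activations: each hidden unit contributes one "kink", so a single hidden layer can realize any piecewise-linear interpolant of the target, and the worst-case error shrinks as we add units. The function names (`build_relu_net`, `relu`) and the choice of sin as the target are illustrative assumptions, not part of any standard API.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def build_relu_net(f, a, b, n_hidden):
    """One-hidden-layer ReLU net that interpolates f at n_hidden + 1 knots on [a, b]."""
    knots = np.linspace(a, b, n_hidden + 1)
    ys = f(knots)
    slopes = np.diff(ys) / np.diff(knots)
    # Output weights: the first slope, then the change in slope at each interior knot.
    weights = np.concatenate([[slopes[0]], np.diff(slopes)])
    biases = -knots[:-1]  # unit i turns on at x >= knots[i]
    bias_out = ys[0]

    def net(x):
        x = np.asarray(x, dtype=float)
        h = relu(x[..., None] + biases)   # hidden layer activations
        return h @ weights + bias_out     # linear readout
    return net

# Max error over a dense grid drops as the hidden layer grows.
xs = np.linspace(0.0, 2 * np.pi, 2001)
for n in (4, 16, 64):
    net = build_relu_net(np.sin, 0.0, 2 * np.pi, n)
    err = np.max(np.abs(net(xs) - np.sin(xs)))
    print(f"{n:3d} hidden units -> max error {err:.4f}")
```

This is exactly the UAT trade-off in miniature: accuracy is bought with width, and nothing here says how a learning algorithm would find these weights; they are written down by hand.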
