Optimal approximation of continuous functions by very deep ReLU networks.
We prove that deep ReLU neural networks with conventional fully-connected architectures and $W$ weights can approximate continuous $\nu$-variate functions $f$ with uniform error not exceeding $a_\nu\omega_f(c_\nu W^{-2/\nu})$, where $\omega_f$ is the modulus of continuity of $f$ and $a_\nu, c_\nu$ are some $\nu$-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cannot be achieved by networks with fewer than $\Omega(W/\ln W)$ layers or by networks whose weights depend continuously on $f$.
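To make the rate concrete, here is an illustrative specialization that is not part of the abstract itself; the notation $\tilde f_W$ for the network approximant and the Lipschitz constant $L$ are introduced only for this example. The modulus of continuity is $\omega_f(\delta)=\sup_{\|x-y\|\le\delta}|f(x)-f(y)|$, so if $f$ is $L$-Lipschitz then $\omega_f(\delta)\le L\delta$ and the stated bound becomes
\[
\|f-\tilde f_W\|_\infty \;\le\; a_\nu\,\omega_f\!\left(c_\nu W^{-2/\nu}\right) \;\le\; a_\nu c_\nu L\, W^{-2/\nu},
\]
i.e. for Lipschitz targets the uniform error decays at the rate $W^{-2/\nu}$ in the number of weights $W$.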
Publisher URL: http://arxiv.org/abs/1802.03620
arXiv ID: 1802.03620v1