Existing image representations that support discontinuities, such as discontinuity-aware 2D neural fields [Belhe et al. 2023], require accurate 2D discontinuities as input. However, in applications such as denoising 3D renderings, not all types of discontinuities are available: false negatives caused by sharp textures and refracted geometry lead to blurring. We introduce a novel discontinuous neural field model that jointly approximates the target image and recovers discontinuities.
Effective representation of 2D images is fundamental in digital image processing, where traditional methods such as raster and vector graphics struggle with sharpness and textural complexity, respectively. Current neural fields offer high fidelity and resolution independence but require predefined meshes with known discontinuities, restricting their utility. We observe that by treating all mesh edges as potential discontinuities, we can represent the discontinuity magnitudes as continuous variables and optimize them. We further introduce a novel discontinuous neural field model that jointly approximates the target image and recovers discontinuities. Through systematic evaluations, our neural field outperforms other methods that fit unknown discontinuities with discontinuous representations, exceeding Field of Junctions and Boundary Attention by over 11 dB in both denoising and super-resolution tasks, and achieving 3.5× smaller Chamfer distances than Mumford–Shah-based methods. It also surpasses InstantNGP with improvements of more than 5 dB (denoising) and 10 dB (super-resolution). Additionally, our approach shows remarkable capability in approximating complex artistic and natural images and in cleaning up diffusion-generated depth maps.
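The core observation above — that every candidate edge can carry a continuous discontinuity magnitude, with optimization deciding which edges are true discontinuities — can be illustrated with a minimal 1D sketch. This is not the paper's 2D mesh-based method; it is a toy analogue in which each inter-sample edge of a noisy step signal holds a magnitude, and a soft-threshold (the proximal operator of an assumed L1 sparsity penalty) collapses noise-driven magnitudes to zero while preserving the genuine jump:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D analogue (not the paper's 2D method): every edge between
# adjacent samples carries a continuous "discontinuity magnitude";
# sparsifying those magnitudes separates the true jump from noise.
n = 16
x = np.linspace(0.0, 1.0, n)
clean = (x >= 0.5).astype(float)              # single step discontinuity
noisy = clean + 0.02 * rng.standard_normal(n)

# One candidate magnitude per inter-sample edge.
d = np.diff(noisy)

# Soft-threshold (prox of an L1 penalty): small, noise-driven
# magnitudes collapse to zero; large ones survive as discontinuities.
tau = 0.15
d = np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

# Reconstruct the field from the surviving magnitudes.
recon = noisy[0] + np.concatenate([[0.0], np.cumsum(d)])

print(np.count_nonzero(d), int(np.argmax(np.abs(d))))
```

In this sketch only the edge at the true step (between samples 7 and 8) survives thresholding, so the reconstruction keeps the sharp jump while the noise-level differences are flattened — a one-shot stand-in for the joint optimization the paper performs over mesh-edge magnitudes and field values.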