You’d have to be living under a rock if you haven’t witnessed a child throwing a temper tantrum in a public place. Most times we just frown: strangers blame the parenting, parents blame the friend circle, and everybody blames the internet.

To some extent, we believe all these factors influence how we perceive scenarios and build emotional responses. However, who do we blame when entities not exposed to any of these influences behave erratically? It's true! Our new hyperfixation, AI, has been diagnosed with top-grade stubbornness. Research shows that neural networks, the backbone of any AI system, are typically biased towards learning less complex functions. In other words, neural networks prefer to learn a coarse, low-detail representation of the data instead of being meticulous. [1]

High Frequency vs. Low Frequency Functions

In neural networks, low-frequency and high-frequency functions signify different patterns or variations in the data.

Low-frequency functions capture global, large-scale patterns, broad features, or overall structures in the data. They represent coarse details and exhibit slower variations.

Conversely, high-frequency functions represent fine or local features, capturing quick variations and small-scale details such as edges and textures in images.
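
As a rough illustration of this split (a sketch of our own, not code from the cited papers), a simple blur separates the two: blurring an image keeps its broad, slowly varying structure, and subtracting the blurred version from the original leaves the edges and textures behind.

```python
import torch
import torch.nn.functional as F

# Stand-in for a grayscale image, shape (batch, channels, height, width).
image = torch.rand(1, 1, 64, 64)

# A box blur acts as a crude low-pass filter: it keeps the coarse structure.
kernel = torch.ones(1, 1, 7, 7) / 49.0
low_freq = F.conv2d(image, kernel, padding=3)

# The residual holds the fine, quickly varying detail (edges, textures).
high_freq = image - low_freq
```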

The Problem?

At first glance, it may seem beneficial to emphasize high-frequency content when training a model, since it carries the finer details that could improve learning. In practice, however, neural networks learn low-frequency content more quickly and effectively. Feeding high-frequency data directly often leads to prolonged training times and unsatisfactory results. This tendency to pick up low frequencies readily while struggling to converge on high frequencies is known as spectral bias.
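
A quick way to see spectral bias in action is to fit a small MLP to a 1D signal that mixes a low-frequency and a high-frequency sine, and watch which component shows up in the prediction first. The sketch below is our own illustration; the architecture and hyperparameters are arbitrary choices, not anything prescribed by [1].

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target signal on [0, 1): a 1 Hz (low) plus a 16 Hz (high) sine component.
x = (torch.arange(512) / 512.0).unsqueeze(1)
y = torch.sin(2 * torch.pi * 1 * x) + torch.sin(2 * torch.pi * 16 * x)

# Plain coordinate MLP mapping x -> y, no Fourier features.
mlp = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(2001):
    pred = mlp(x)
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        # Amplitude of the prediction at the two target frequencies
        # (a fully learned component reaches roughly 512 / 2 = 256).
        spec = torch.fft.rfft(pred.detach().squeeze())
        print(f"step {step:4d}  loss {loss.item():.4f}  "
              f"1 Hz amp {spec[1].abs().item():6.1f}  "
              f"16 Hz amp {spec[16].abs().item():6.1f}")
```

On a run like this, one would expect the 1 Hz amplitude to approach its target well before the 16 Hz amplitude does, which is exactly the behaviour that [1] formalizes.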

An MLP couldn’t learn a single image with fine details even after a significant number of iterations, even when the number of weights exceeded the number of pixels in the image. [2]

However, spectral bias isn't entirely negative. Some researchers credit the generalization ability of neural networks to this phenomenon: they observed that robust networks with good generalization properties (that is, networks that perform better on new, unseen data) tend to be biased towards processing low frequencies in images. [3]

Fourier Networks to the rescue!

In an ideal world, we would want our model to learn quickly, capture fine details, and predict accurately across a wide variety of use cases. In the real world, however, a neural network on its own cannot adapt to local features optimally and needs a diverse dataset to generalize well. To save computational resources and time, researchers have used Fourier feature mapping, which enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains: instead of feeding raw coordinates to the network, the inputs are first passed through a set of sinusoids, spreading the information across frequencies the network learns easily. This allows MLPs to represent complex 3D objects and scenes. [2]
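
Concretely, the Gaussian Fourier feature mapping from [2] projects the input coordinates through random sinusoids, γ(v) = [cos(2πBv), sin(2πBv)], where B is sampled from a Gaussian. Below is a minimal sketch of that idea; the feature count and scale sigma are illustrative choices of ours (the paper tunes them per task), not values prescribed by [2] or by Siml.ai.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Gaussian random Fourier feature mapping: gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)]."""

    def __init__(self, in_dim: int, num_features: int = 256, sigma: float = 10.0):
        super().__init__()
        # B is sampled once from N(0, sigma^2) and kept fixed (not trained).
        self.register_buffer("B", torch.randn(in_dim, num_features) * sigma)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        proj = 2 * torch.pi * v @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# Example: a coordinate MLP mapping 2D pixel coordinates to RGB values, with the
# Fourier mapping prepended so fine image detail becomes learnable.
model = nn.Sequential(
    FourierFeatures(in_dim=2, num_features=256, sigma=10.0),   # output size 512
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),
)

coords = torch.rand(1024, 2)   # (x, y) coordinates in [0, 1]^2
rgb = model(coords)            # predicted colours, shape (1024, 3)
```

The scale sigma effectively sets the bandwidth of frequencies the network can represent: too small and fine detail is still lost, too large and the output becomes noisy, so in practice it is tuned per task. [2]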

At Siml.ai, we've integrated a neural network node into our visual editor that includes pre-implemented Fourier transforms to expedite the setup and training process. You can select from a wide array of architectures provided by the NVIDIA Modulus framework with just a click of a button. No advanced math equations are required!

We find that artificial intelligence displays a human-like trait: it prefers a simpler solution and does not learn high-frequency details as quickly as intended. However, once the input signal is adjusted with a simple Fourier feature mapping, the model converges rapidly and accurately. Therefore, tools like Siml.ai, which incorporate this effective solution, are worth reaching for when dealing with stubborn networks and complex data!

References:

[1] Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F. A., Bengio, Y., & Courville, A. (2018, June 22). On the Spectral Bias of Neural Networks. arXiv.org. https://arxiv.org/abs/1806.08734

[2] Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., & Ng, R. (2020, June 18). Fourier features let networks learn high frequency functions in low dimensional domains. arXiv.org. https://arxiv.org/abs/2006.10739

[3] Karantzas, N., Besier, E., Caro, J. O., Pitkow, X., Tolias, A. S., Patel, A. B., & Anselmi, F. (2022, March 16). Understanding robustness and generalization of artificial neural networks through Fourier masks. arXiv.org. https://arxiv.org/abs/2203.08822v1