In recent weeks, we have seen repeated attempts to suggest that Mathematics is essentially a mind game we make up as an aspect of culture. There has been very strong resistance to the idea that there are intelligible manifestations of structure and quantity embedded in the fabric of the world (and indeed in that of any possible world). And when test cases have been put on the table, they have been consistently brushed aside as cases where our mathematical modelling has merely been applied; that is, it's all in our heads.
So, it is appropriate to put on the table a test case that is quite literally in our heads: hearing, and particularly how the cochlea works. Video:
We see here how a frequency domain transformation makes use of the mechanical properties of the inner ear. That is, our hearing moves from the time domain to the frequency domain, sensing pitch; also, subtle timing differences between sound arrivals at our right and left ears help us to locate sound sources in the space around us.
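The cochlea is of course a mechanical filter bank rather than a digital computer, but a minimal Python sketch (assuming only NumPy; the tone frequencies and sampling rate are illustrative choices, not anything from the thread) shows the same time-to-frequency move in software: a half-second, two-tone pressure signal is handed over purely as a time series, and a Fourier transform pulls its pitch content back out.

```python
import numpy as np

# Half a second of "sound": two tones, 440 Hz and 1320 Hz, sampled
# as a time-domain pressure signal.
fs = 8000                      # sampling rate, samples per second (illustrative)
t = np.arange(0, 0.5, 1 / fs)  # time points covering 0.5 s
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

# Move from the time domain to the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two strongest components come back out at the tone frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # -> [440.0, 1320.0]
```

The ear reaches a comparable frequency-domain representation by purely mechanical means, which is the point at issue. As was noted in a comment in the Fourier thread: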

_____________
KF, 4: >>On how hearing creates a frequency domain transform of sound inputs, driving the onward processing, Wiki is a handy reference:
The stapes (stirrup) ossicle bone of the middle ear transmits vibrations to the fenestra ovalis (oval window) on the outside of the cochlea, which vibrates the perilymph in the vestibular duct (upper chamber of the cochlea). The ossicles are essential for efficient coupling of sound waves into the cochlea, since the cochlea environment is a fluid–membrane system, and it takes more pressure to move sound through fluid–membrane waves than it does through air; a pressure increase is achieved by the area ratio of the tympanic membrane to the oval window, resulting in a pressure gain of about 20× from the original sound wave pressure in air. This gain is a form of impedance matching – to match the soundwave travelling through air to that travelling in the fluid–membrane system . . . .
The perilymph in the vestibular duct and the endolymph in the cochlear duct act mechanically as a single duct, being kept apart only by the very thin Reissner’s membrane. The vibrations of the endolymph in the cochlear duct displace the basilar membrane in a pattern that peaks a distance from the oval window depending upon the soundwave frequency. The organ of Corti vibrates due to outer hair cells further amplifying these vibrations. Inner hair cells are then displaced by the vibrations in the fluid, and depolarise by an influx of K+ via their tip-link-connected channels, and send their signals via neurotransmitter to the primary auditory neurons of the spiral ganglion.
The hair cells in the organ of Corti are tuned to certain sound frequencies by way of their location in the cochlea, due to the degree of stiffness in the basilar membrane.[3] This stiffness is due to, among other things, the thickness and width of the basilar membrane,[4] which along the length of the cochlea is stiffest nearest its beginning at the oval window, where the stapes introduces the vibrations coming from the eardrum. Since its stiffness is high there, it allows only high-frequency vibrations to move the basilar membrane, and thus the hair cells. The farther a wave travels towards the cochlea’s apex (the helicotrema), the less stiff the basilar membrane is; thus lower frequencies travel down the tube, and the less-stiff membrane is moved most easily by them where the reduced stiffness allows: that is, as the basilar membrane gets less and less stiff, waves slow down and it responds better to lower frequencies. In addition, in mammals, the cochlea is coiled, which has been shown to enhance low-frequency vibrations as they travel through the fluid-filled coil.[5] This spatial arrangement of sound reception is referred to as tonotopy . . . . Not only does the cochlea “receive” sound, it generates and amplifies sound when it is healthy. Where the organism needs a mechanism to hear very faint sounds, the cochlea amplifies by the reverse transduction of the OHCs, converting electrical signals back to mechanical in a positive-feedback configuration. The OHCs have a protein motor called prestin on their outer membranes; it generates additional movement that couples back to the fluid–membrane wave. This “active amplifier” is essential in the ear’s ability to amplify weak sounds.[6][7]
The active amplifier also leads to the phenomenon of soundwave vibrations being emitted from the cochlea back into the ear canal through the middle ear (otoacoustic emissions) . . . .
Otoacoustic emissions are due to a wave exiting the cochlea via the oval window, and propagating back through the middle ear to the eardrum, and out the ear canal, where it can be picked up by a microphone. Otoacoustic emissions are important in some types of tests for hearing impairment, since they are present when the cochlea is working well, and less so when it is suffering from loss of OHC activity . . . .
The coiled form of cochlea is unique to mammals. In birds and in other non-mammalian vertebrates, the compartment containing the sensory cells for hearing is occasionally also called “cochlea,” despite not being coiled up. Instead, it forms a blind-ended tube, also called the cochlear duct. This difference apparently evolved in parallel with the differences in frequency range of hearing between mammals and non-mammalian vertebrates. The superior frequency range in mammals is partly due to their unique mechanism of pre-amplification of sound by active cell-body vibrations of outer hair cells. Frequency resolution is, however, not better in mammals than in most lizards and birds, but the upper frequency limit is – sometimes much – higher. Most bird species do not hear above 4–5 kHz, the currently known maximum being ~ 11 kHz in the barn owl. Some marine mammals hear up to 200 kHz. A long coiled compartment, rather than a short and straight one, provides more space for additional octaves of hearing range, and has made possible some of the highly derived behaviors involving mammalian hearing . . .
In short, sinusoidal frequency domain decomposition of sound waves is a key mechanical phenomenon exploited by our hearing system, leading in effect to a frequency domain transformation of the temporal pattern of compressions and rarefactions that we term sound. This is of course closely related to the patterns we explored and discovered using Fourier series and integral analysis of oscillations and transient pulses.
Where, on the mechanical side, harmonic motion is tied to elastic and inertial behaviour. Which in turn is directly connected to a rotating vector analysis, leading straight to the complex exponential analysis that draws out the full power of complex numbers, of the form Z = R*e^(i*w*t), w being the circular frequency 2*pi*f (in radians per second) and f the cycles-per-second frequency. All of this ties back to the fundamental frequency cycles and integer-multiple frequency harmonic epicycles in the OP above.
Again, mathematical study turns out to reflect quantities, structures and linked phenomena which are embedded in the fabric of our world.
KF
PS: Notice, not a few design subtleties?
PPS: The vocal tract, in effect a wind instrument, also exploits fundamentals and harmonics to create auditory, frequency-based patterns as well as transients.>>
______________
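To make the rotating-vector analysis in the quoted comment concrete, here is a minimal Python sketch (again assuming NumPy; the 200 Hz fundamental and the harmonic amplitudes are hypothetical values chosen only for illustration). It builds a wave as a fundamental plus integer-multiple harmonic epicycles of the form Z = R*e^(i*w*t), then lets Fourier analysis recover exactly the harmonic structure that was put in.

```python
import numpy as np

fs = 8000                      # sampling rate in Hz (illustrative)
f0 = 200                       # fundamental frequency in Hz (hypothetical choice)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples -> 1 Hz bin spacing

# Rotating vectors Z = R * e^(i*w*t), w = 2*pi*k*f0: a fundamental (k = 1)
# plus 2nd and 3rd harmonic "epicycles" with diminishing amplitudes R.
amps = {1: 1.0, 2: 0.5, 3: 0.25}   # harmonic number k -> amplitude R
wave = sum(R * np.exp(1j * 2 * np.pi * k * f0 * t)
           for k, R in amps.items()).real

# Fourier analysis of the real waveform recovers the harmonic amplitudes.
spectrum = np.abs(np.fft.rfft(wave)) / (len(wave) / 2)
freqs = np.fft.rfftfreq(len(wave), 1 / fs)
for k in amps:
    b = k * f0                 # bin index equals frequency at 1 Hz spacing
    print(f"harmonic {k}: {freqs[b]:.0f} Hz, amplitude {spectrum[b]:.2f}")
```

The real part of each rotating vector is a cosine at k times the fundamental, so the synthesised wave is simply a truncated Fourier series, and the printed amplitudes come back as the R values fed in. This is the same fundamental-plus-harmonics structure that the basilar membrane spreads out along its length.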
We see here yet another case where structure and quantity are embedded in the natural world and are exploited in the design of our bodily organs; here, those for hearing. Thus, literally in our heads. END