24 Aug 2022

Listening in on Astronomical Data

A new paper in Nature Astronomy, which includes authors from Swinburne University of Technology, outlines the potential of broadening access to and analysis of astronomical data by converting it to sound.

Credit: 24-life.

When we think of astronomy, we immediately picture big, beautiful images of the cosmos in all its glory - long shadows falling across ancient impact craters on the Moon, the icy rings of Saturn, the gracefully swirling arms of galaxies, and even point-like quasars at some of the most distant locations we can measure.

Both ground- and space-based telescopes (like the recently launched JWST) act as imaging time machines - bringing the wonders of our Universe into our homes and onto our devices, and allowing us to marvel at and explore environments and phenomena that many of us would never come across during the day-to-day grind of our lives.

Astronomy content itself is now packaged and marketed not only to tap into the part of your brain that feels awe, but also to aim squarely at your emotions - we all remember the sadness of watching the Cassini spacecraft tweet its final words as it plunged through Saturn's atmosphere to its fiery death, or seeing the last fading signal of the Martian rover Opportunity after it had served its time paving the way for future Martian robots.

On top of this, many more of us are participating in astronomy - as astronomers, data analysts, backyard citizen scientists, and astrophotographers. More people are now tapping into these science and space-inspired activities, thanks to the plunging costs of equipment and accessibility through our mobile devices. We’re even consuming more media about it - these days, there isn’t a week that goes by without a space-related event or image being shared on the nightly news.  

But an ongoing problem for astronomy is that it lies almost entirely in the realm of visualisation - our visual senses must do the work of observing and processing the output data, whether for pleasure, analysis or research. When we look through a telescope, we use our eyes. When we collect data from instruments, we need our eyes to read computer monitors. When we process many data streams into easy-to-understand graphs and plots, we need our eyes to build and present this information.

Now, a new paper in Nature Astronomy outlines how astronomy can approach the presentation of scientific data through a multi-sensory method - including the sonification of data (a well-established and already-tested idea). Sonification is the presentation of scientific data using sound, and the enormous data sets produced across the wide range of spectra in astronomy (electromagnetic, gravitational, etc.) are a treasure trove waiting to be tapped into.
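To make the idea concrete, here is a minimal parameter-mapping sketch in Python (a hypothetical illustration, not the method described in the paper): a made-up brightness series is mapped linearly onto pitch, and the resulting tones are written out as a short audio file.

```python
# A minimal parameter-mapping sonification sketch (illustrative only):
# each value in a 1-D data series is mapped to a pitch, and the resulting
# tones are concatenated and written to a WAV file. The data are made up.
import wave
import numpy as np

SAMPLE_RATE = 44100          # audio samples per second
NOTE_LENGTH = 0.15           # seconds of audio per data point

def sonify(values, filename="sonified.wav", f_low=220.0, f_high=880.0):
    """Map each data value linearly onto a frequency between f_low and
    f_high and render it as a short sine tone."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min() or 1.0
    freqs = f_low + (values - values.min()) / span * (f_high - f_low)

    t = np.linspace(0.0, NOTE_LENGTH, int(SAMPLE_RATE * NOTE_LENGTH), endpoint=False)
    audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

    # Convert floats in [-1, 1] to 16-bit PCM and write a mono WAV file.
    pcm = (audio * 32767).astype(np.int16)
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

# Example: a noisy sine curve standing in for a star's brightness over time.
brightness = np.sin(np.linspace(0, 4 * np.pi, 80)) + 0.1 * np.random.randn(80)
sonify(brightness)
```

Choosing what gets mapped to what - pitch, loudness, duration, timbre - is the basic design decision in any sonification.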

There are several main issues with a visualisation-only approach. Large data sets, or variations within them, are harder for the human brain to digest visually, because the brain constantly chooses which data to process and which to throw away. For those of us who can see, we do this all the time - for example, the screen you are reading this on is in focus, but outside that focus region you can still see the rest of the environment around you. Our eyes collect data from these regions all the time, from every angle that light can enter, but our brains do not process these out-of-focus regions - and so that data is not stored or used.

As another example, when thousands of data points pop up and down in a time series, or subtle changes occur sporadically across the dynamic range of big data, our eyes find it harder to measure and analyse those changes. It is similar to watching a movie - the film runs at a fast rate (roughly 24 frames per second) to give our brains just enough information to play back the movie in 'real time'. If one of those 24 frames fails, our eyes and brain don't register the loss of data. If many of them fail, we start to notice small glitches or jumps in the movie.

All of this applies to those of us who are fully sighted, able to use our eyes to most of their capacity. But we live in a world where many people are blind or experience a wide range of vision impairments. Unfortunately, that means that some, or all, visualised astronomy data becomes inaccessible or impossible to engage with.

And here is where sonification can step in to resolve many of these issues. By converting astronomical data into sound, we can make astronomy much more accessible to a wider range of people, and also tackle some of the issues we face when dealing with larger data sets. 

Human hearing (using our ears to collect and process information) is much better than vision at handling multiple layers of sound at once, it is always active, and it does not require directionality (i.e., sound can arrive from any direction and we can still detect it, unlike our eyes, which must be pointed at what we want to see). For example, when we hear a song playing in the background, our brains can access memory to recognise it, while also picking out multiple components simultaneously - the beat, the lyrics, that epic guitar piece. The brain processes all these complex data streams even while the song plays somewhere off in the distance.

We are also built to focus on particular audible data streams, increasing their signal-to-noise ratio by cancelling out the background noise. That song playing off in the distance could easily be playing in a crowded room full of noisy, chatting people (like a busy bar on a Friday night), and yet we can still detect it, analyse it and access our database of memories about it.

It is these traits and qualities that, when applied to sonified astronomical data sets, can open up accessibility to wider audiences, and potentially create opportunities for data analysis (and the science linked to it) in a different sensory domain.

What are the benefits and approaches?

Sonification is the process by which we can hear sounds created from data collected around the Milky Way. NASA/CXC/SAO/K.ARCAND, SYSTEM SOUNDS (M. Russo/A. Santaguida).

Applying sonification to astronomical data is not a new concept. For example, rapidly rotating neutron stars known as pulsars broadcast their signals at radio frequencies (a form of light with longer wavelengths than the light that lands on our eyes). In order to 'see' these signals, we use big radio telescopes to collect the data. Once it has been collected, the signals can be digitised and converted into sound, so that we can now 'hear' each pulse as the beams sweep past the Earth. Take a listen to a few different pulsars here.
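As a rough illustration of what that conversion involves (a hedged sketch, not how any particular observatory actually processes its data), a pulsar's pulse train can be rendered as a track of audible clicks. The period below is roughly that of the Vela pulsar and is assumed purely for illustration.

```python
# A hedged sketch of turning a pulsar's pulse train into audible clicks.
# Real pulsar sonifications start from recorded radio data; here the
# pulse train is synthetic and the period is only approximate.
import wave
import numpy as np

SAMPLE_RATE = 44100
PERIOD = 0.089        # seconds between pulses (roughly Vela); an assumption
DURATION = 5.0        # seconds of audio to generate

def pulsar_clicks(period, duration, filename="pulsar.wav"):
    """Place a short decaying click at every pulse arrival time."""
    audio = np.zeros(int(SAMPLE_RATE * duration))
    click = np.exp(-np.linspace(0, 8, int(0.01 * SAMPLE_RATE)))   # 10 ms decaying click
    for t in np.arange(0.0, duration, period):
        start = int(t * SAMPLE_RATE)
        end = min(start + click.size, audio.size)
        audio[start:end] += click[: end - start]
    pcm = (audio / np.abs(audio).max() * 32767).astype(np.int16)
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

pulsar_clicks(PERIOD, DURATION)
```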

Even data from gravitational wave merger events, collected over the last few years with the giant laser interferometers, has been sonified, allowing us to hear the now-famous chirp of merging black hole and neutron star systems as they let out one final gasp of energy in their last few dying seconds.

The authors of the recent Nature Astronomy paper note that one benefit of converting astronomical data into sound is that it offers a new way to analyse the data sets from a science perspective - listening for time-based patterns that our ears pick up better, hearing changes in multiple layers of simultaneous data, or even monitoring continuously without interruption (we are always actively listening, whereas our vision is regularly interrupted by blinks).

It also means we could make astronomy engagement much more accessible to a larger audience, one that does not need the deep training that comes with interpreting visual science data. Everyone knows how to listen for patterns, changes in those patterns, and new features, which strengthens the citizen science angle. And of course, participation through sonified science data sets becomes more inclusive for people who are blind or vision impaired, and for people with neurodivergent traits (such as dyslexia and autism).

Astronomical Data Already Sonified

There are a number of existing works and activities in which data has already been sonified and made available; these can be found here (a resource put together by the authors of the paper). A selection of astronomy-related examples is presented below.

Matt Russo, Andrew Santaguida, and Dan Tamayo have developed a tool in which the user can dial the orbital speed of Jupiter's four Galilean moons up and down, with each revolution triggering a soft drum sound (as shown in the video above). By playing the video at slightly faster than normal speed, the user can already hear the resonance of these moons, and how this resonance slowly drops out over longer time periods. This is a form of analysis - listening to data sets and noting any variations that shouldn't be there.

In another case, a special kind of pulsar (known as a Black Widow) spins 622 times per second, with an unfortunate small companion in close proximity. The intense radiation from the pulsar is slowly ablating and devouring the companion (hence the name 'Black Widow', after the spider), and as this occurs, vast quantities of shredded material are released into the tight orbit around the pulsar - which is where things get interesting.

Every now and then, this debris field lenses and amplifies pulses of radio light, so they emerge as giant pulses compared to the regular ones. Visually, we see these as individual rising peaks, but when the radio data is converted into sound, we can pick them out far more easily. The sonification in this video was also created by Matt Russo, Andrew Santaguida, and Dan Tamayo.

Whilst not directly astronomy-related (but still planetary in nature), this last video is the sonification of data representing earthquakes that occur around Japan. Each magnitude on the Richter scale is given a different amplitude, and every time an earthquake event occurs, a small sound is played. What is immediately noticeable is that this compressed time series reveals thousands of earthquakes occurring all the time.

A slightly terrifying outcome of this sonification comes when the massive magnitude 9 earthquake of 11 March 2011 occurs, unleashing a torrent of buzzing sounds as the violent event ruptured and continued to produce ongoing aftershocks.
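The same event-based mapping can be sketched in a few lines of Python. The catalogue and the magnitude-to-loudness rule below are invented for illustration and are not the mapping used in the video above: each event becomes a short ping, with time compressed and louder pings for larger magnitudes.

```python
# A hedged sketch of event-based sonification: each (time, magnitude) pair
# becomes a short tone, with time compressed and magnitude mapped to loudness.
# The event list is invented for illustration.
import wave
import numpy as np

SAMPLE_RATE = 44100
TIME_COMPRESSION = 3600.0     # one hour of real time becomes one second of audio

def sonify_events(events, filename="quakes.wav"):
    """events: list of (time_in_seconds, magnitude) pairs."""
    duration = max(t for t, _ in events) / TIME_COMPRESSION + 0.5
    audio = np.zeros(int(SAMPLE_RATE * duration))
    blip_t = np.linspace(0, 0.05, int(0.05 * SAMPLE_RATE))          # 50 ms tone
    blip = np.sin(2 * np.pi * 440 * blip_t) * np.exp(-blip_t * 60)  # decaying ping
    for t, mag in events:
        amp = (mag / 9.0) ** 2                 # louder pings for bigger magnitudes
        start = int(t / TIME_COMPRESSION * SAMPLE_RATE)
        end = min(start + blip.size, audio.size)
        audio[start:end] += amp * blip[: end - start]
    pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

# Invented catalogue: (seconds since start, magnitude)
catalogue = [(600, 3.1), (4200, 4.5), (9000, 2.8), (15000, 6.2), (21000, 3.9)]
sonify_events(catalogue)
```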

These opportunities can also take the form of games that add value to citizen science, as is the case with Black Hole Hunter. Here, the user is asked to listen to four different data sets; within one of them, the audible gravitational wave 'chirp' is buried in the background noise and static. The user selects the correct data set, helping to classify true signals from random noise data that can be discarded.
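For a feel of what the player is listening for, here is a hedged sketch that synthesises a crude rising 'chirp' and buries it in Gaussian noise; the sweep parameters are made up and do not correspond to any real gravitational-wave event.

```python
# A hedged sketch of the kind of signal the game asks players to listen for:
# a crude rising-frequency 'chirp' buried in Gaussian noise.
import wave
import numpy as np

SAMPLE_RATE = 44100
DURATION = 2.0

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
# Frequency sweeps from 100 Hz to 600 Hz, accelerating towards the end.
freq = 100.0 + 500.0 * (t / DURATION) ** 3
phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
chirp = np.sin(phase) * (t / DURATION) ** 2      # grows louder towards the 'merger'

noise = np.random.randn(t.size)                  # background static
signal = 0.4 * chirp + noise                     # chirp buried in the noise

# Render to a mono 16-bit WAV file, as in the earlier sketches.
pcm = (signal / np.abs(signal).max() * 32767).astype(np.int16)
with wave.open("chirp_in_noise.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(pcm.tobytes())
```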