Examining space isn’t easy – there’s an awful lot of it. That’s why Dr Ingo Waldmann, a post-doctoral research associate at University College London, turned to computers for help in identifying the make-up of the atmospheres of exoplanets. The catch is that what’s relatively easy for humans is difficult for machines: we’re simply better at pattern recognition than an unthinking algorithm could ever be. The problem is that humans take far too long to sift through that much data.
For help, Dr Waldmann turned to machine learning, creating software called Robotic Exoplanet Recognition (RobERt, for short), which uses a neural network – artificial intelligence modelled on human brains. Dr Waldmann then trained RobERt on a database of 80,000 simulated spectra to recognise the signatures of water, methane and other key molecules in exoplanet atmospheres.
RobERt boasts a 99.7% accuracy rate, but it can also “dream”: ask the robot to “imagine” water, and it approximates what that spectrum would look like based on its own experience. We spoke to Dr Waldmann to find out why neural networks are the right technology to eyeball space, and to find out what else RobERt might discover.
What problem were you trying to solve with RobERt?
What we work on is characterising the atmospheric spectra of exoplanets. When a planet passes in front of its star, some of the starlight shines through the atmosphere, and we can analyse that light for molecular abundances and trace gases such as methane, water, carbon monoxide and so on.
We can also look at the emitted light, which we measure when the planet goes behind the star: at that point we lose what the planet radiates from its surface and atmosphere, and the difference reveals the planet’s own emission. Analysing that is a lot harder and requires far more time.
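The transit technique Dr Waldmann describes rests on a simple geometric fact: when the planet crosses the star, the star dims by roughly the ratio of their disc areas, and at wavelengths where the atmosphere absorbs, the planet looks slightly bigger. A minimal back-of-the-envelope sketch (using standard solar and Jovian radii, not any specific system from the interview):

```python
# Transmission spectroscopy in one idea: during transit the star dims by
# roughly (R_planet / R_star)^2. Wavelengths absorbed by the atmosphere
# make the planet appear slightly larger, so the dip is slightly deeper
# there - and that wavelength-dependent depth is the spectrum.
R_sun = 6.957e8   # metres (nominal solar radius)
R_jup = 7.1492e7  # metres (nominal Jovian equatorial radius)

depth = (R_jup / R_sun) ** 2
print(f"{depth:.4f}")  # ~0.0106, i.e. a ~1% dip for a Jupiter-size planet
```

The molecular signals sit on top of this: an absorbing atmosphere changes the effective radius by a small fraction, so the features RobERt must pick out are far smaller than the 1% transit itself.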
So we came up with the idea to have a neural network that learns. As human beings, we can very easily detect patterns in these spectra – but for an algorithm, that’s difficult.
How does the neural network work?
The neural network is a deep-belief network [it has multiple layers that can communicate with each other]. It’s relatively simple in that sense. We gave it three layers of neurons – three extraction layers – and then basically we trained it.
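The interview doesn’t show RobERt’s code, but the structure described – an input spectrum passed up through three extraction layers – can be sketched as a simple stack of sigmoid layers. The layer sizes below are placeholders, not RobERt’s real dimensions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: a binned spectrum as input, followed by
# three feature-extraction layers of decreasing width (an assumption;
# the real sizes aren't given in the interview).
layer_sizes = [500, 200, 100, 50]

rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def forward(spectrum):
    """Pass a spectrum up through the stack, layer by layer."""
    h = spectrum
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h  # top-level feature activations

features = forward(rng.random(500))
print(features.shape)  # (50,)
```

Each layer compresses the spectrum further, so the top layer holds a compact summary of the input – the representation the labels get attached to during training.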
How do you train a neural network?
We generated 80,000 spectra with our theoretical model and presented them to the neural network. At the beginning, we trained it blindly [unsupervised]: we gave it spectra and it tried to optimise itself internally. Then there’s a second stage where you present a pure water spectrum or a pure methane spectrum and tell it what it is, so [it can] attach a label to what it has already learnt.
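The two stages described above – blind self-optimisation first, labels second – are the classic recipe for a deep-belief network: greedy layer-wise pre-training of restricted Boltzmann machines, then a supervised layer on top. A toy sketch, assuming sigmoid units and CD-1 contrastive divergence (the data, sizes and output layer here are illustrative, not RobERt’s):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One layer of a deep-belief network: a restricted Boltzmann machine."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.a = np.zeros(n_vis)  # visible bias
        self.b = np.zeros(n_hid)  # hidden bias

    def up(self, v):
        return sigmoid(v @ self.W + self.b)

    def down(self, h):
        return sigmoid(h @ self.W.T + self.a)

    def cd1(self, v, lr=0.05):
        """One step of contrastive divergence: the 'blind' training stage."""
        h0 = self.up(v)
        v1 = self.down(h0)
        h1 = self.up(v1)
        self.W += lr * (v.T @ h0 - v1.T @ h1) / len(v)
        self.a += lr * (v - v1).mean(axis=0)
        self.b += lr * (h0 - h1).mean(axis=0)

# Stage 1: greedy layer-wise pre-training on unlabelled spectra.
spectra = rng.random((200, 64))   # stand-in for the 80,000 model spectra
stack = [RBM(64, 32), RBM(32, 16)]
x = spectra
for rbm in stack:
    for _ in range(50):
        rbm.cd1(x)
    x = rbm.up(x)                 # feed activations up to the next layer

# Stage 2: attach labels (e.g. "water" vs not) to the learnt features,
# here via a simple logistic layer on top - an assumption for illustration.
labels = (spectra[:, :32].mean(axis=1) > 0.5).astype(float)  # toy labels
w_out = np.zeros(16)
for _ in range(200):
    p = sigmoid(x @ w_out)
    w_out += 0.1 * x.T @ (labels - p) / len(x)
```

The key point mirrors the interview: the stack learns its internal representation before ever seeing a label, and the labels are only attached afterwards.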
What’s the “dream mode”?
One way to check how well the neural network has learnt is to turn it upside down. Instead of presenting it with some data and having it tell you what’s in that data, you tell it to imagine how a dataset would look given some components that you want to be in there.
You give it a water or methane label and then turn the neural network upside down. It will then imagine what the spectrum of an exoplanet atmosphere would look like. And interestingly enough, it’s very close to the real thing.
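Mechanically, “turning the network upside down” means running the same weight matrices in reverse: start from a pattern of top-level activations standing for a label and project downward until you reach the input layer. A structural sketch – the weights here are random placeholders, whereas in RobERt they would come from training, so this produces noise rather than a water spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A deep-belief network stores one weight matrix per layer. "Dreaming"
# reuses the transposed matrices: top-level activations down to a spectrum.
W1 = rng.normal(0, 0.1, (64, 32))  # spectrum bins -> hidden layer 1
W2 = rng.normal(0, 0.1, (32, 16))  # hidden layer 1 -> hidden layer 2

def dream(top_activation):
    """Turn the network 'upside down': top features back to a spectrum."""
    h1 = sigmoid(top_activation @ W2.T)
    v = sigmoid(h1 @ W1.T)
    return v  # an imagined spectrum, one value per wavelength bin

water_label = np.zeros(16)
water_label[0] = 1.0  # hypothetical unit standing in for "water"
imagined = dream(water_label)
print(imagined.shape)  # (64,)
```

Because the generative direction uses the same weights as recognition, a dreamt spectrum that looks realistic is good evidence the network has genuinely internalised what, say, water absorption looks like.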
What is RobERt looking for?
At the moment, we have no idea how our solar system formed and evolved, because it’s quite hard if you just have that one example to go by. So we’ll be able to look at lots of other systems very quickly and efficiently with tools such as RobERt and put our formation history into a context.
So not any of the aliens the tabloids are talking about then?
That’s funny, because the original press release had absolutely nothing to do with aliens. Even with next-generation telescopes we won’t have the sensitivity to find aliens. [RobERt] might find signatures of biomarkers such as ozone, but even that is still very unlikely.
Dr Ingo Waldmann is a research associate in the department of physics and astronomy at UCL