SKA telescope to provide a billion PCs' worth of processing (updated)

Petaflop machines the way of the future

Glenn Wightwick, director, IBM Australia Development Laboratory, IBM A/NZ.


Two technologies currently under research by IBM may hold the key to processing and storing the exabyte (10^18 bytes) of data expected to flow each day from the Square Kilometre Array (SKA) telescope project.

The company, which is part of a research consortium that includes Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), Curtin University of Technology and the University of Western Australia, is currently working down a technology roadmap that leads to the development of an exaflop machine — the processing equivalent of about a billion PCs. It is also developing a new form of solid state storage, 'Racetrack Memory', which may hold the key to storing the SKA's vast amount of astronomical data.

Much of the progress toward solving the massive engineering problems posed by the SKA is still at the whiteboard and analysis stage, but real progress was being made, said the director of IBM's Australia Development Laboratory for ANZ, Glenn Wightwick.

“In the last year or two IBM has built machines in the order of a petaflop and in the last couple of weeks IBM announced an ongoing partnership with the US Department of Energy to build a 20 petaflop machine by 2011-2012,” he said.

“[But] we know we need to build an exabyte machine for the SKA over the next 10 years — that’s 50 times larger and equivalent to about 1 billion PCs — so, that gives you a sense of the size of the problem.”
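The arithmetic behind those figures can be sketched in a few lines, assuming (hypothetically) about 1 gigaflop of sustained throughput per desktop PC:

```python
# Back-of-envelope check of the scale quoted above: 50 machines of
# 20 petaflops each is 1 exaflop, and at an assumed 1 gigaflop per
# desktop PC that works out to roughly a billion PCs.
PETA = 10**15
GIGA = 10**9

exaflop_target = 50 * 20 * PETA    # 1,000 petaflops = 1 exaflop
pc_flops = 1 * GIGA                # hypothetical per-PC throughput

print(exaflop_target)              # 1000000000000000000 (10**18)
print(exaflop_target // pc_flops)  # 1000000000, about a billion PCs
```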

Despite the size of the challenge, Wightwick said IBM’s roadmaps and belief that the processor technology could be driven much further meant that it was unlikely that a billion cores would be required in 10 years’ time.

“We will need machines which probably have hundreds of thousands of processor cores in them and we roughly know how we can go about engineering it,” he said. “It wouldn’t be cost- or technically-feasible to bolt together 50 20-petaflop machines… and the power consumed would be crazy. By the time we deliver the 20 petaflop machine we will be well on the way to an exaflop machine.”

Given the need to process data streaming off the SKA telescope in real time, it was unlikely IBM or the rest of the SKA research consortium would look to utilise grid or cloud computing environments, Wightwick said.

“Moving that much data around is a challenging problem. It’s therefore likely to be a machine or machines that are scaled large enough to process the volume,” he said. “In the past you might have just thought about how many processors you need and whether you can put enough of them together.

"Today, we have to think about how many processors we can feasibly put together, how much power and cooling we can get, where the machine can be sited, how much data we can transfer. It is a really interesting challenge.”

Storage is an equally challenging problem, Wightwick said. To address this, the SKA project was likely to use some form of stream computing to analyse data on the fly, sending useful data to storage and discarding the rest.
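A minimal sketch of that stream-computing idea: analyse samples as they arrive, keep the scientifically useful ones and discard the rest, so only a fraction of the raw stream ever reaches storage. The amplitude threshold here is a hypothetical stand-in for whatever detection test a real pipeline would apply:

```python
# Filter a data stream on the fly, keeping only "interesting" samples.
# Thresholding on magnitude is a toy stand-in for a real detection test.

def stream_filter(samples, threshold):
    """Yield only the samples whose magnitude exceeds the threshold."""
    for s in samples:
        if abs(s) > threshold:
            yield s

incoming = [0.1, 5.2, -0.3, 7.8, 0.05, -6.1]   # toy data stream
kept = list(stream_filter(incoming, threshold=1.0))
print(kept)  # [5.2, 7.8, -6.1]
```

Because `stream_filter` is a generator, it never holds the full stream in memory, which is the point of processing the data before it is stored.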

“There will still be a massive storage capability and we aren’t even sure what it might look like,” he said. “Disk drives are approaching 1.5TB and we will continue to see increases in drive technology, but there are a lot of interesting storage technologies coming on board (such as Racetrack Memory).

“It is very fast, unlike flash, and is denser. It is still a research project but we know — based on these projects — what it is capable of. Five or 10 years out, it may be applicable to deploy in this project.”

IBM was also working with the CSIRO on developing advanced algorithms in areas such as correlation, which facilitate the integration of high volumes of data from multiple telescopes to create more detailed astronomical images.
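The correlation step can be illustrated with a toy example: two antennae see the same signal with a relative delay, and cross-correlation recovers that delay so the signals can be combined. Real SKA correlators operate on vast, complex-valued streams; this sketch only shows the principle, with made-up sample data:

```python
# Toy cross-correlation: find the lag at which two antenna signals
# line up best, by maximising the sum of element-wise products.

def cross_correlate(a, b, max_lag):
    """Return the lag in [0, max_lag] where sum(a[i] * b[i + lag]) peaks."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(a[i] * b[i + lag] for i in range(len(a) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

signal = [0, 0, 1, 3, 1, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 3, 1, 0]   # same pulse, shifted by 2 samples
print(cross_correlate(signal, delayed, max_lag=4))  # 2
```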

IBM also recently announced that it had partnered with Western Australia’s International Centre for Radio Astronomy Research (ICRAR) on the technology for the SKA.

The telescope, 50 times more sensitive than current instruments, will use approximately 3,600 antennae spread over thousands of kilometres to peer into deep space.

The SKA will capture data on the evolution of galaxies, dark matter and energy, providing insight into the origins of the universe around 13 billion years ago.


Tags: CSIRO, exaflop, IBM, ASKAP, racetrack memory, petaflop, stream computing



Clement Dickey


I'm a bit confused. "Petaflop" usually means "petaflops", or peta-FP-operations-per-second. The proposed computer needs to process and store 1 exabyte per *day.* That isn't necessarily a 50 times larger project. The article seems to compare peta, floating ops, and seconds to exa, byte "processing and storaging", and days.

Clement Dickey


Sorry, "storaging" should read "storing."



What a waste of money. If you want to know the origin of the universe then just read the bible!



Really? Could you point me to the part of the bible that explains quantum mechanics? Or the part that simulates billions of years of life? *eye roll* Speaking as someone who believes in God (and the bible), you're just too stupid to be on the internet. Please log off.



If you believe that, then why are you even using devices developed from science?



This is a stunning development!!! An exaflop means it is not only very usable for research into galaxies but also for making a map of the brain. With so much computing power we could build a detailed virtual model of the brain's nervous system. Then we could create robots with similar capabilities, medical improvements and so on!



It is confusing, but they refer to both exaflops and exabytes. They will need both the speed and the data. The new array (SKA) is 50 times more sensitive than the VLA.



Haha, I chuckled a little at the bible comment.

This is a good place to mention the Blue Brain project, and how they've already managed to simulate a neocortical column with an IBM Blue Gene /L supercomputer, which is estimated at around 478 TFLOPS.

Although an exaflop would be enough to simulate the human brain, it does sound a bit too optimistic, a whole order of magnitude from the IBM Roadrunner.



I get a good lol out of comments like that.



Can't wait for the data to roll in a few years from now, to prove yet again that the lying bible fanatics are wrong!



The Singularity is coming...



Why not just spend the money to help the needy!

