NBN 101: Floating the submarine cable question

We take a deep look at the issue of international links to the internet and what this means for broadband investment in Australia

This article is part of Computerworld Australia's NBN 101 series, in which we take a look at the arguments surrounding the fibre-to-the-home (FTTH) network, and dissect them one by one. The articles are meant to be an overview of the debates central to the National Broadband Network (NBN) and other broadband infrastructure projects to give you a grounding as more and more media outlets and commentators speak out on the project. We encourage people to take the discussion further in the comments section.

In our first article we took a look at how Australia’s NBN plan compares to the rest of the world using statistics and graphs from the OECD, and then we strapped in for a tour of speeds. We also looked at wireless technologies versus fibre optic, delved into the economic argument for a high-speed national broadband network, and examined how applications and potential service packages may play a role in the NBN. More recently we discussed whether mobility is a friend or foe.

Now we turn our attention to our international links to the internet, one of the topics that surprisingly popped up during the Federal Election.

Bedding down the submarine cable links

One of the conversations to emerge from the Federal Election campaign’s broadband discourse centred on our international links to the internet.

In short, the argument went that mass infrastructure investment in projects like the National Broadband Network (NBN), no matter how fast, would be ultimately bottlenecked by Australia's international links. Support for this argument has some convincing elements: 70 per cent of the content Australians access is based overseas and the submarine cable links connecting us to the rest of the digital world simply aren't abundant.

To build a network of the NBN's scale without factoring in additional international links would relegate Australians to a proverbial pipe dream (excuse the pun).

Throughout the election campaign several commentators used this reasoning to various ends. Pro-NBNers, on the other hand, countered that Australian internet access would become much more local, as increased bandwidth afforded greater benefits for internal communication and applications.

However, most commentators didn’t look beyond a very shallow interpretation, one that is in many ways fundamentally erroneous.

So let’s look a bit deeper.

Aside from the fact that the US is a highly successful mass content producer and has for some time put a lot of national effort into bolstering its IT industry, one of the reasons so much of the internet content Australians access is located offshore is the manner in which our networks are architected.

IDC analyst David Cannon explains that when websites were mainly static content, it didn’t matter if updates to information took two or three days to complete. Around the late 90s and even early this century, a lot of this content was cached domestically, predominantly with hosting company Melbourne IT.

At the time, ISPs bought data from the big carriers – Telstra, Optus, AAPT, and WorldCom (now Verizon) – in two strands: domestic and international.

“All of it travelled back and forth locally at a cheap price,” Cannon said. “Then all of a sudden websites started to become more dynamic with multimedia capabilities. What was happening was the cached data wasn’t keeping up with what the websites were trying to achieve. Simultaneously there was the Southern Cross pipes coming onboard, making data far more accessible and much cheaper.

“What they were hoping to do was sell clear channel pipes to the ISPs who could go to LA and peer with basically the internet and negotiate data rates themselves, and the telco would just sell them the pipe. That didn’t work out because they were really asking an arm and a leg. I remember selling the first STM-1 Southern Cross pipe to connect.com, which was AAPT’s ISP.

“I think it was something like $US12,000 a month or something like that just for a 2 megabit per second (Mbps) pipe. Back then that was good capacity, but of course now that is just ridiculous. Then if you wanted to move to STM-1 or anything like that you were talking about hundreds of thousands of dollars US per month. It was just out of reach.”

To cut a long story short, things evolved: prices came down, broadband emerged, and the dynamic content on websites meant it was better to go direct to the US for data instead of paying to cache it domestically. And we have lived with that architecture for the past 10 years.

So do we have a capacity bottleneck to access this data? Not even close.

As far back as early last year Robin Russell, CEO of the Australia-Japan Cable, wrote that international networks are nowhere near being a capacity constraint.

“That proposition can be despatched immediately,” he wrote. “Each of the four networks that will be providing the bulk of international connections for Australia is capable of carrying at least a terabit per second of data. The total international capacity in use for the Australian market in 2009 is estimated to be around 300 gigabits per second. Accordingly, total capacity usage could double, then double again, then double again, and then double yet again before the capabilities of those networks was exhausted. It would therefore be difficult to say that international networks are a capacity bottleneck in the Australian market.”

The four cables he was referring to are:

  • Southern Cross Cable Network
  • Australia-Japan Cable
  • Telstra’s Endeavour
  • PIPE Networks’ PPC-1

There are other submarine cables but these are the four major transit routes for most of our internet traffic.
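Russell’s doubling arithmetic is easy to sanity-check. Here is a back-of-envelope sketch using his 2009 figures: a conservative floor of 1Tbps per cable across the four networks, against roughly 300Gbps of capacity in use.

```python
# Headroom check using Russell's 2009 figures: four cables at a
# lower bound of 1 Tbps each, versus ~300 Gbps of capacity in use.
capacity_gbps = 4 * 1000   # conservative floor: 1 Tbps per cable
usage_gbps = 300

doublings = 0
projected = usage_gbps
while projected * 2 <= capacity_gbps:
    projected *= 2
    doublings += 1

print(doublings, projected)  # doublings that fit under the 4 Tbps floor
```

Against that conservative 4Tbps floor, usage can double three full times (to 2.4Tbps) before the next doubling overshoots; Russell’s fourth doubling relies on the cables carrying more than the 1Tbps minimum, which his "at least a terabit" wording allows for.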

(See the images at the top of this story for the cable routes and maps.)

Moreover, there are other cable projects in the works. In July, it was announced that raw network bandwidth out of Australia is set to double with a $US400 million undersea cable.

Data carriers Pacnet and Pacific Fibre are partnering to build the Pacific Fibre cable, a low-latency undersea fibre optic cable spanning Australia, New Zealand and the US.

The new cable will have a minimum of two fibre pairs with 64 wavelengths per pair. Each wavelength has a throughput capacity of 40 gigabits per second (Gbps), for a total of 5.12 terabits per second (Tbps) of bandwidth.

The cable is estimated to be 13,600km long and will connect Sydney, Auckland and Los Angeles, bypassing the likes of Guam and Hawaii for the time being. The pipe can be upgraded to 12Tbps with 100G technology.
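The headline capacity figure follows directly from the per-wavelength numbers quoted above:

```python
# Pacific Fibre design capacity from the quoted figures:
# 2 fibre pairs x 64 wavelengths per pair x 40 Gbps per wavelength.
pairs = 2
wavelengths_per_pair = 64
gbps_per_wavelength = 40

total_gbps = pairs * wavelengths_per_pair * gbps_per_wavelength
print(total_gbps / 1000, "Tbps")  # 5.12 Tbps
```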

And in July, many analysts and observers backed the announced upgrade to the Asia Pacific Cable Network 2 (APCN2) from 10 gigabits per second (Gbps) to 40Gbps.

So capacity is really not even close to being an issue at this stage. Will we need more in future? Most likely, yes, particularly if NBN Co makes good on promises to deliver peak speeds of 1Gbps. But the existing cables can be upgraded by swapping out the terminal equipment to increase the already abundant capacity.

Yet although we don’t have a bandwidth bottleneck by any stretch of the imagination, that doesn’t mean you will get top speeds from content located offshore.

Layer 10’s Dr Paul Brooks explains.

“Because of the round trip delay, you are not going to get 100Mbps of download from an international server regardless of how much capacity is sitting there unused,” Brooks said. “If you have a server close by, yes, your average PC can get 20, 30 or 40Mbps from that. But exactly the same download, even if you had infinite capacity, would still give you only 6 to 10Mbps from the international server because of the round trip delay.”
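Brooks is describing the classic bandwidth-delay problem: a single TCP connection can move at most one window of data per round trip, so throughput is capped at window size divided by RTT no matter how fat the pipe. A rough sketch with assumed figures (a pre-window-scaling 64KB window, and nominal 10ms domestic versus 160ms trans-Pacific round trips):

```python
# Single-connection TCP throughput ceiling: at most one window of
# data can be in flight per round trip, so throughput <= window / RTT.
def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

window = 64 * 1024  # assumed 64 KB window (no TCP window scaling)

print(round(max_throughput_mbps(window, 0.010), 1))  # ~10 ms domestic RTT
print(round(max_throughput_mbps(window, 0.160), 1))  # ~160 ms Sydney-LA RTT
```

With those assumed figures the same download is capped around 52Mbps locally but only about 3Mbps trans-Pacific; larger windows (via TCP window scaling) raise the ceiling accordingly, which is consistent with the 6 to 10Mbps range Brooks cites.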

So even if we build the NBN we aren’t going to get the speeds they promised anyway, so what’s the point in spending the money?

Well, there are a number of reasons why this conclusion is acutely short-sighted, not least that we won’t be using broadband infrastructure just to access internet content from the US. Nor should it be used to preclude building top-class telecommunications infrastructure, even if it isn’t the NBN as we know it under the Labor administration.

Next: The domestic caching trend





Congratulations, the best most informitive article on NBN so far!

Balanced and reasoned,



It was also very informative...oop's



And who cares what a minion to the Liberal party, who has not had a single thought of his own, since 1955, really thinks?



At last ... less hyperbole for self interest, and finally an analysis with reason. It continues to amaze me how the old international cost horse keeps getting trotted out while the ULL cost elephant sits in the corner. Well done.

D Newman


At last a decent map/info release to counter the fallacy of the overseas "bottlenecks", was getting tired of countering that argument all the time with just my word alone .....

At last data is being released about the NBN, looking forward to the costings report next week(hopefully made public quickly), going to be like throwing cat nip into a room full of bored cats.(this forum).



This is the stuff that our Polies should read..



I disagree. Fibre to the door so we can do more of the same is pointless. Faster porn & fewer trips to video-ezy would drive the need you describe.
But High-Speed internet could encourage more companies to let their employees telecommute (work from home). Remember 15% drop in traffic = 40% better flow (as you see in school holidays)
High-Speed Internet to regional areas would permit these telecommuters to live away from out major cities. Bringing new life to shrinking rural areas.
Online meetings & collaboration, video conferences & online presense all benefit from high reliable bandwidth & don't have to be hosted O/S. Cloud Data centres could be hosted in Australia.

I agree to create a balanced net the O/S pipes do need to be factored in. But my point is O/Seas is not everything. Especially if our ISP's become more price competitive.



Thanks alot, with that said:
[[“Because of the round trip delay, you are not going to get 100Mbps of download from an international server regardless of how much capacity is sitting there unused,” Brooks said. “If you have a server close by, yes, your average PC can get 20, 30 or 40Mbps from that. But exactly the same download, even if you had infinite capacity, would still give you only 6 to 10Mbps from the international server because of the round trip delay.”]]

That is for single threaded downloads, you can simply use 100 threads and get 1gb/s from a server on the other side of the planet assuming it has the backhaul to support it.
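The aggregation this commenter describes is simple arithmetic: each connection is individually capped by window/RTT, but the caps add, provided the server and path have spare capacity. A sketch with hypothetical figures (64KB window, 160ms RTT, so roughly 3.3Mbps per connection):

```python
# N parallel TCP connections: per-connection ceilings (window / RTT)
# add up, assuming the far end has the backhaul to feed them all.
per_conn_mbps = 64 * 1024 * 8 / 0.160 / 1e6  # hypothetical 64 KB window, 160 ms RTT

for n in (1, 10, 100):
    print(n, "connections:", round(n * per_conn_mbps, 1), "Mbps")
```

As later replies note, "threads" here really means parallel TCP connections, and not every application can open them.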



Despite the high RTT one can achieve much higher throughput if the server that US end is using a better TCP congestion control algorithm such as hybla. That is, an algorithm more suited to high latency satellite links. A smart ISP could also provide transparent throughput improving boxes at either end. But Australia lacks smart ISPs.



And gav: if you are viewing HD flash, for example, then you simply cannot open up 100 "threads" (I think you really mean 100 parallel connections). Parallel connections are only really usable in very special circumstances and they aren't always easy to achieve.



Nah I meant threads, you can also use certain download managers to 'speed up' the viewing of content on Youtube and such, as you can extract the location of the .flv file and view it locally once it downloads quicker, but it is very easy to speed up HTTP / FTP downloads and whatnot with multiple threads.

I reguarly pull 18Mbit (2.1MB/s) from servers on the other side of the planet with multiple threads.



To gav: you can easily set up a (mostly transparent) proxy on a vps in the usa and install the appropriate improved tcp congestion control and then you don't need anything special to achieve much higher speeds. You don't even need to locate the .flv and fiddle with multiple connections (your use of the term threads is totally wrong because you can get multiple connections with just a single thread). And the bonus is you get full access to Hulu and more.

Another Telco Analyst


The first to light capacity on the Southern Cross cable was Comindico, not AAPT. There is a photo of James Spencely plugging the first international wave into their router in the US.

Whilst most small-to-medium providers purchased capacity off one of the Gang of Four, most large providers (TPG for example) purchased Pan-Am-Sat capacity. This continued as late as 2003, with wholesale ISP Veridas continuing to do so for a brief period of time. Most providers with independant connectivity to the US would purchase domestic-only connectivity from the GoF.

Melbourne IT was (and always has been) a domain registrar and web hosting company (although they have now added managed services to their portfolio). What David Cannon is alluding to is cache sharing, which was performed between major ISPs who wanted to share the contents of their proxy caches locally, to avoid having to haul data from the US -> AU which had already been transited by another ISP. As far as I am aware, cache handling was never something that MIT offered as a product (although there's a strong chance that they would have had private peering relationships with most ISPs).

Oh, the times when Frame Relay was a legitimate backhaul methodology....

David Cannon


@Another Analyst,

As you would know there can be subtle differences in what was said versus what is written when doing media interviews. So just to clarify, AAPT's first SC circuit was sold to Connect.com. The point was to provide awareness on the history of international backhaul, how it was sold to ISPs and the costs.

The assumption on MIT is correct. They hosted all .au domains hence the peering allowed for domestic traffic charging. Again, the point here was to provide insight on how ISPs managed data.

Perhaps you could contribute to the article by providing some insight on the shift away from caching and your thoughts on the need to better manage high bandwidth consuming applications?

@ Trevor Clarke,

Great article. I'm forecasting you will do well as an industry analyst in the not too distant future. :-)

Cheers, David



Thanks for this article - very informative, but missing some 'local' perspective. I get utilities services - like water, electricity and gas and it doesn't matter how the infrastructure is beyond my property, I get the same as everyone. Not so with my broadband. I live approx. 4.5km from our Telstra exchange, and thus I have 4.5km of copper to slow down our broadband speed (a LOT). My understanding is that a full NBN would replace that entirely with 4.5km of fibre from the exchange to my home (via nodes and other devices in between). Great in theory, but I doubt will ever happen, or I won't be able to afford it. The alternative I thought from the Coalition is to have fibre from the exchange to the node (at the top of my street), then copper the last 200 metres - at a fraction of the cost of the NBN. I could certainly live with the coalition's option, as my broadband speed would be DRAMATICALLY INCREASED, for a fraction of the cost of a full NBN. Could you discuss this part of the NBN and discuss how or if this would make a dramatic speed increase, without skyrocketing the cost of broadband? I've also read for the NBN will have to connect an average 4,000 homes EVERY DAY for the next 8 years! Is this realistic, and how old will the technology be on the first installations after the last one is installed?

D Newman


@MichealID seriously its fibre what are you expecting to replace it with, faster than light pigeons.
Fibre is the road, the ends of which can be upgraded as and when is needed, in essence its good for a very very long time, the life of the fibre cable is 60 years plus, and would expect it to still be in use for its life.



@16 Newman would you guarantee the splicing for 60 years?

What are your comments on the LTE network recently built in Sweden, and the same network being built in the States given 100% coverage for $7 billion.
Makes NBN a bit expensive!



@15 MichaelD, Telstra estimated 20,000 electrified and DSLAM-equipped nodes would be required to deliver OPEL ADSL. That's a very environmentally-unfriendly and operationally expensive deployment, and locks in ADSL2+ as a permanent speed ceiling. Most of the speed is lost after 1 or 2 km, which is absurd in our vast land. And you still need fibre from exchange to each node, so why not do drops at the premises along the way and be done with it?

@17 Raymond, the splicing appears to be more robust than you might expect, as a lot of early fibre (which has been in use 40 years) is still working. Materials and techniques have also improved a lot over that time. Even when there is a break, you can locate it electronically and go straight to the location to fix it, unlike binary searching for a break in a copper trunk.



@MichaelD what makes you think it's going to be so unattainably expensive?

The example you gave in regard to water and electricity is exactly what the internet in Australia is going to become - standard. Everyone will have it, everyone will enjoy it and there will be a lot of competition from the ISP's to provide competitive pricing for it.

Additionally, there is literally no other replacement for this type of cable. Short of some amazing star trek subspace communications technology, this will be here for a very very long time.

Now, onto the whole international links topic. As it was made quite clear the current international links are really not even close to capacity, why bother with ANY more (short of links specific for latency advantages)?

There should be incentives given to promote the use of the internal Australian fibre network. Instead of limiting standard upload speed connections on fibre to 1mbit or something utterly pathetic, they really need to consider following europe and give upload connections upward of 50mbit (which honestly, I can see happening) along with free data transfer between users on the NBN.

Can you imagine the result of "NBN only" torrent websites/etc with all users having unlimited data transfer and civilised upload speeds? I'll give you a hint: reduced traffic on underwater cables. Just think about free PIPE traffic many ISPs used to offer.. online radios, torrent websites, entire communities based on this.

Leave it to the users to 'cache' the information and share it all between each other internally. If you build it they will come. Anyone can see it's the intelligent and efficient thing to do, the only people whose pockets might be hurt a little are those who are building these hundred million dollar international links.. and we cant have that, can we?



@Raymond: Apparently Sweden's LTE network delivers only 12MBit in practice.


Another point is that in addition to wireless, Sweden also has cable, ADSL and fibre. If we were to rely ONLY on LTE for the NBN, how would congestion further erode speeds and quality? Each base station may need to share bandwidth with hundreds or even thousands of subscribers.

I'm not convinced it's a good idea to spend $7 billion of public money on a network that might perform worse than ADSL.



Kudos Joe, but there really is no point in trying to correspond rationally with Raymond.

Because Raymond has shown so many times that he is incapable of comprehending anything outside of his narrow band of sub-intelligence.

But the rest of us thank you for your input!



Isn't there an inconsistency in your arguments here in that on the one hand you argue the private sector is able to provide the market all the international capacity it requires at no expense to the taxpayer, but on the other hand a massive taxpayer investment is required to provide broadband to the individual consumer?

Not against the NBN per-se, especially for the delivery of services to rural areas, however not convinced that the government needs to get involved in metropolitan areas. As always with government spending there are opportunity costs, and its only natural for the IT sector to defend its territory, but we should always be mindful of the limited means and unlimited wants in society at large.
At the end of the day everyone likes to believe in a free lunch.
