To boldly go where no internet has gone before
One of the founding fathers of the internet is now working on bringing advanced communications to space exploration
October 30, 2007
Vint Cerf, one of the founding fathers of the internet, hopes to have space-based testing of an advanced communications system running by 2010.
Vint Cerf is often referred to as the founding father of the internet, but the man himself is humble about his role in its development. His invention of the Transmission Control Protocol/Internet Protocol (TCP/IP) while working for the U.S. Department of Defense in the 1970s helped create the backbone of the internet, but Cerf generally likes to share the credit with the hundreds of other scientists and engineers who made it happen.
It's a mindset he brings to his current project: the logical extension and expansion of the internet beyond Earth and into space. Aside from his day job as a vice-president and "chief internet evangelist" of Google Inc., Cerf is spearheading the development of an interplanetary internet. He discussed the project with CBC News.
Can you tell us about the interplanetary internet? It sounds like something out of a science-fiction movie.
It does, depending on who you talk to. Either it sounds like science fiction and they think you've lost your mind, or they think you're proposing to spend hundreds of billions of dollars to build this giant interplanetary network and hope somebody uses it. Neither of those is correct.
So far, the expenditures have been de minimis as these things go because it's basically all protocol standardization and some software design. No hardware has really been required other than using conventionally available stuff.
The effort got started in the spring of 1998. It revolved around a belief that if we were to look 50 to 100 years into the future, we might see some substantial amount of space-based communications requirements emerging, either because we have a large number of robotic probes in operation around the solar system, and sensor networks and things of that kind on the surfaces of the planets or their moons, or (and it might be "and") there might be manned exploration under way.
As time went on and you had an increasing number of robots and possibly manned missions, the kind of rich networking functionality that we get out of the internet today might be needed in order to service all those different possible communicating systems.
Of course we don't expect it will be anything like the scale we see on Earth, but we were curious to know how we would go about creating a richer networking environment, because historically the communications for space-based events and projects have been point-to-point links. Sometimes radio-relayed links [are used], where the signal goes up on one frequency and it goes through an intermediate point where the frequencies get changed and it gets transmitted further on, but it's literally like a bent pipe, which is the term that's sometimes used.
It's not like a store-and-forward router or anything like what we use in the internet today. It's essentially a point-to-point radio link. For the most part, that's worked out pretty well, and I don't want to suggest that this is motivated by a belief that these systems weren't working or were useless. However, as you get a larger and larger number of missions running concurrently, you run into some problems, because the system that is used to communicate on an interplanetary basis is usually the Deep Space Network, which has three main antennas — the big 70-metre dishes — in Goldstone, Calif., Madrid, Spain, and Canberra, Australia. There are some additional, smaller 34-metre dishes at those sites as well, so there is some capacity for multiple antennas, but still, not a very large number of concurrent communications can take place.
So we said, let's take as a theoretical problem the creation of a rich networking environment in which you could have an unlimited number of termination points, sensor systems, mobile operations like the rovers, things in orbit around planets that are sending data back, and maybe even situations where there are multiple devices at the planet that need to intercommunicate with each other without having to go back to Earth. This is almost the case now with Mars, where we've got orbiters and two live rovers, with the Phoenix lander on its way. We can imagine, particularly in the context of a manned mission, that you might have a laboratory on the landed vehicle and a bunch of rovers or fixed sensor systems that you need to interact with locally, and possibly relay information from one sensor group to another through a satellite that might be in synchronous orbit.
We asked ourselves, "Can we use standard TCP/IP? Can we literally just extend the internet to operate across the solar system, and we're done?" Of course, very quickly we discovered that didn't work. Part of the reason is that the round-trip times are so long. The TCP/IP protocols are not designed to work with 40-minute round-trip times, which is what you get between Earth and Mars when we're farthest apart in our orbits, and it only gets worse as you get to the outer planets.
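As a rough back-of-the-envelope check (my own illustration, not from the interview), the round-trip light time at maximum Earth-Mars separation, assuming a distance of roughly 401 million kilometres, lands in the range Cerf describes:

```python
# Illustrative calculation of Earth-Mars round-trip light time near
# maximum separation. The 401-million-km figure is an approximation.
C = 299_792_458            # speed of light, m/s
MAX_EARTH_MARS_M = 401e9   # ~401 million km when the planets are farthest apart

one_way_s = MAX_EARTH_MARS_M / C
round_trip_min = 2 * one_way_s / 60
print(f"round trip: {round_trip_min:.1f} minutes")  # → round trip: 44.6 minutes
```

TCP's handshakes and acknowledgment timers assume round trips measured in milliseconds to seconds, so a delay of this magnitude breaks its core assumptions rather than merely slowing it down.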
There's also the disruption component because the planets are rotating and things on the surface disappear from view and you can't communicate with them until they come back in view, unless of course you can relay through a satellite, for example.
So we evolved a new form of networking protocol that we call "delay- and disruption-tolerant networking." The protocol is called the "bundle" protocol because the objects that are sent and received are called bundles. We just picked a word that was different from "packet," but it's a packetized architecture that is very similar in spirit to the internet, except that we used different kinds of protocols to deal with the delay and disruptions that are evident especially in the interplanetary environment.
The protocol design is pretty much done. We've gone through four iterations of it. We have released free, open-source software, so it's available for anyone who's interested. There's a website called www.dtnrg.org.
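The store-and-forward behaviour Cerf describes can be sketched in a few lines. This is a toy illustration only: the class and method names are invented here, and the real open-source implementations are the ones available through www.dtnrg.org.

```python
# Toy sketch of DTN-style store-and-forward: nodes hold bundles until a
# contact window opens, instead of requiring an end-to-end path.
from collections import deque

class DtnNode:
    """Holds bundles in storage until a contact with a next hop is available."""
    def __init__(self, name):
        self.name = name
        self.stored = deque()  # custody of bundles survives link outages

    def receive(self, bundle):
        self.stored.append(bundle)  # take custody; nothing is dropped on disconnect

    def contact(self, next_hop):
        """Forward everything held while a contact window is open."""
        while self.stored:
            next_hop.receive(self.stored.popleft())

# A rover out of view of Earth hands its data to an orbiter, which relays
# it later when its own Earth contact opens.
rover, orbiter, earth = DtnNode("rover"), DtnNode("orbiter"), DtnNode("earth")
rover.receive("science-bundle-1")
rover.receive("science-bundle-2")
rover.contact(orbiter)    # rover-to-orbiter window
orbiter.contact(earth)    # later, orbiter-to-Earth window
print(len(earth.stored))  # → 2: both bundles arrive despite no direct link
```

The contrast with TCP/IP is that no end-to-end connection ever exists here; each hop takes responsibility for the data until the next contact opportunity.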
When you say "we," you are talking about…
I'm sorry, this is not Google, this is [NASA's] Jet Propulsion Laboratory. For the purposes of this discussion, I'm reporting to you as what they call a "distinguished visiting scientist," which is better than "extinguished visiting scientist," at the Jet Propulsion Lab. I've served there as a visiting scientist and I'm on the advisory board to the director of JPL. Those are two different roles, but "we" in that case is me plus my group at JPL.
We also have a small group at Mitre Corp., which participates as well, and we have an advisory group that draws on a number of universities around the country, including UCLA, MIT, Carnegie Mellon and University of Delaware and a few others.
The project initially had funding from [the U.S. Defense Advanced Research Projects Agency], the same organization that funded the internet, to do the basic architecture. Later, we got additional funding from DARPA to take what we had learned in this bundle protocol for delay- and disruption-tolerant networking and apply it to tactical military communication. We realized that in the potentially volatile, high-risk environment of tactical military communications, these protocols might be more robust than traditional TCP/IP. We demonstrated that fact in field exercises with the Marine Corps.
At this point, we think we have a pretty solid set of protocols. They've been demonstrated terrestrially to be robust in the face of significant impairment. Now what we're about is getting these things space-qualified.
NASA has a practice of rating software or technology (it could be hardware) with what are called technology readiness levels. They go from one to nine. We are trying to go from demonstrating the prototype at a TRL of five to the point where this is a deployable technology, which is a TRL of seven or eight. We're aiming to get it to where it is deployable in a live mission by 2010.
As always, NASA is under budget pressure, especially because of the shutdown of the shuttle and the creation of the CEV Constellation project and maintenance and completion of the International Space Station. All of these things are contending for funding for robotic missions. Along with everyone else, we're struggling for that kind of funding. If we are funded at the level at which we have requested, we should be able to demonstrate this capability in the International Space Station, we hope, by 2010.
We also have the possibility of putting the protocols on board the platform that was used to do Deep Impact. It was a spacecraft with a probe that was launched into a comet that was passing by, and the main platform captured quite a bit of data from the probe as it struck the comet. It may have blown up, I don't remember how they did it, but the idea was to examine the interior of the comet by blasting material out of it. That mission has been completed, and NASA has one other mission for this remaining space platform, but once they have completed that they've said it will be available to my group to test the interplanetary protocols. We'd like to get those two space-based tests done somewhere in the 2010 to 2012 period.
We're hoping that the demonstrations will be sufficiently persuasive that NASA will adopt these kinds of protocols in the future, so that all future missions will be compatible and mutually supportive. If you complete a mission, the equipment is still there, out in space or orbiting around the sun or the planet. It's got processing and memory and transmission capability, so it could be incorporated into an interplanetary network.
The theory is if you standardize communications, you can use these various mission nodes as way stations in an interplanetary backbone.
Would it be even a close analogy to say you're taking space communications and bringing them from dial-up to broadband?
No, because that's only paying attention to data rates. It's not paying attention to going from a point-to-point tin can and a string to a switched telephone network. If you need to use a telephone analogy, that's closer to what it is than narrowband versus broadband. Quite independent of everything else, there is also this question of how much data can I get back from these various missions. There is a lot of interest in moving from radio frequency to optical, but unfortunately the budget squeeze has killed at least two missions that were going to test an optical laser that would be on a spacecraft in the general vicinity of Mars. I was disappointed that the tensions between the manned program and the robotic program have led to a fairly substantial diminution in the funding of the robotic program. A lot of science is not happening as a consequence of that. I wish we could either arrange for more money or different priorities to increase the support for the robotic missions, because they generally produce substantially more science per buck than the manned missions do.
With the probe that was fired into the comet — if such a probe were working with the system you're developing, would the quality and quantity of the information sent back be drastically increased?
It has the potential to increase them. In that particular case, it was a point-to-point link that transmitted the data back. We have some analytical demonstrations of mission configurations where doing the store-and-forward of the DTN protocols increases the total deliverable content by a factor of 10. It depends on the topology of the mission, what is in orbit, who can see whom and when. If you're always visible to the Earth, chances are pretty good a point-to-point link is going to work. If you're not always visible, then some store-and-forward links can substantially improve your ability to deliver data. We were motivated in part by the potential increase in total data transfer as well as the flexibility to support a larger number of devices in the system and to automate the whole process of routing, so that we don't have to manually schedule everything, which is what we do today.
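The gain Cerf mentions depends entirely on visibility windows. A toy model (with numbers I've invented to match his factor-of-10 example, not actual mission data) shows how relaying through a node with better visibility raises total delivered data:

```python
# Hypothetical numbers illustrating why store-and-forward can raise total
# delivered data when the sender is not always visible from Earth.
SLOTS = 100            # time slots in the scenario
DATA_PER_SLOT = 1      # units of data a lander generates per slot
direct_visible = 8     # slots in which the lander can see Earth directly
relay_visible = 80     # slots in which an orbiter can see lander or Earth

# Point-to-point: data moves only while the direct link is up; anything
# generated out of view has no custodian and is not delivered.
delivered_direct = direct_visible * DATA_PER_SLOT

# Store-and-forward: the orbiter takes custody of bundles whenever it sees
# the lander and drains them whenever it sees Earth, so delivery is limited
# by the relay's contact time, not the direct line of sight.
delivered_dtn = min(SLOTS * DATA_PER_SLOT, relay_visible * DATA_PER_SLOT)

print(delivered_dtn / delivered_direct)  # → 10.0 in this toy setup
```

Real mission analyses would use actual contact schedules and link rates, but the structure of the argument is the same: intermittent direct visibility is the bottleneck that custody transfer removes.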
I should mention one other thing. In an effort to make sure this was internationally acceptable and visible, we have a parallel-track effort to get the bundle protocols standardized in the Consultative Committee for Space Data Systems. That work is going on in parallel, and my counterpart at NASA, a man named Adrian Hooke, is pushing very, very hard to make sure that what we do is standardizable and acceptable in the international space community.
How did this all start coming about?
One of my engineers at MCI [where Cerf was a vice-president] had worked at the JPL in the summertime and I had started talking about an interplanetary internet in late 1997. I was going around lecturing on the idea that we ought to think seriously about how to design such a thing, and he heard about that. He said I should get in touch with his former colleagues at JPL, so we all got together in March of 1998 and Adrian led the delegation. Within about 30 seconds, we were finishing each other's sentences, so there was an instantaneous recognition that this was an important thing to do. Adrian had been around in the days of the Viking missions of 1976 and also had worked on getting very basic frame-type, or packet-like, communications standards into the interplanetary communications [system].
He'd gone some distance in that direction and wanted to go further, so we had similar objectives. We started this project in 1998, and I looked for funding from DARPA and got it. Then we did several preliminary implementations, went through a series of iterations, and went back to DARPA and said we think we should try this out in tactical communications. Where we are now is that we've demonstrated it's useful in a military context, and we have another experiment going on in the northern part of Sweden among the reindeer herders, the Sami. The Sami have lived in that area for about 8,000 years. We did some tests two years ago using these DTN protocols on the back of an all-terrain vehicle that was occasionally coming and going from a local village that had a Wi-Fi tower. The idea was to test the DTN protocols for dropping off and picking up content, like a little data mule showing up once in a while.
It worked very well, so now we're planning another test with multiple villages in 2008. If that works well, we may try and find funding to deploy this DTN communication system, because when you're that far north, it's actually quite hard to get communications to work.
Which brings us to the potential applications of this on Earth. They primarily fall into mobile communications, where you are most challenged by things like radio disconnection, interference and shadowing, where you're out of contact. The protocols are extremely robust in the presence of loss, and this is not the case for TCP/IP, which is pretty sensitive to packet loss.
A quick question for the uninitiated: is WiMAX based on TCP/IP?
No, WiMAX is a lower-level thing; it's a radio transmission standard. TCP/IP can run on top of WiMAX perfectly well. If WiMAX gets used in a mobile environment and it is not as solid as it is in a fixed environment, then DTN may turn out to be preferable to TCP/IP.
So we're actively seeking to get the technology readiness level up to where we can get it deployed. The fact that this may take until 2010 or 2011 means it will have taken us 13 years by that time. I'm no stranger to how long it takes to get stuff like this to happen. Getting it standardized, pushing it through, dealing with people who think this is silly or say it's science fiction, or ask why I need to do that: it's a replay of trying to get TCP/IP to happen. I've learned patience and persistence, and they count.
Going back to the 1960s, is this something you could have envisioned?
Probably not. In 1964, that was when the deep space net was put up initially, and of course I knew about it because I was a big fan of space exploration and science fiction and everything else. So it wasn't hard to envision this, but the reason I didn't pick it up till 1997 was that I had got to thinking that we were in the middle of the big dot-boom and I could see the internet was going to take off. Even when the dot-boom went dot-bust it didn't matter, because the internet continued to grow and it still does — today 1.2 billion people are using the system.
So I got to thinking then, "Gosh, it took nearly 30 years to get here, so what should I be doing now that will be needed 30 years from now?" This is what popped up on my radar screen. The answer is I would have imagined it in the 1960s, but it would have been science fiction to have an internet-based platform. Having a radio-based platform was somewhat predictable, although they had never done it before until 1964 with the big 70-metre dishes. Those were all put together during the Apollo program. The real excitement for me was realizing in the late 1990s the wherewithal to actually do this was nearly at hand. Rather than being a matter of speculation, it was a matter of engineering.