
  Robotic Sun Worship

    In the third segment of Astrobiology Magazine’s four-part interview with roboticist David Wettergreen, he discusses a project to design a robot that, quite literally, followed the sun. Wettergreen is an associate research professor at Carnegie Mellon University, where he works in the Field Robotics Center.

    Astrobiology Magazine: One of the projects you worked on, a robot named Hyperion, was designed to take maximum advantage of solar power. But a lot of solar-powered robots had already been built before you designed Hyperion. How was it different?

    David Wettergreen: Hyperion was a very simple, lightweight (150 kg, 330 pounds) rover with a large solar panel that stood upright, like a sail. The panel was mounted vertically for operation in polar regions, where the sun is low on the horizon. We could have had the panel move to point at the sun, but instead we fixed it to the robot, so that the robot itself had to change its course when it wanted to receive solar energy. This simplified the robot mechanism, but it meant that the robot had to reason about the availability of energy, estimate the power required to carry out an action, and make intelligent decisions. For instance, it had to weigh questions like: If power is running down, should I sprint for the top of the hill and get up in the sun? Should I slow down and try to wait this out? Should I creep a little bit over here, realizing that maybe the sun is going to hit me in a couple of days and I’ll get charged up? Paul Tompkins developed the planner that could figure this out.
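
    The flavor of that reasoning can be hinted at with a small sketch. Below is a minimal Python illustration of energy-aware action selection; the action menu, power figures, and reserve policy are all invented for illustration, and the real planner reasoned over whole routes and resources together rather than picking from a fixed menu of moves.

        # Minimal sketch of energy-aware action selection, in the spirit of
        # the reasoning described above. All names and numbers are invented;
        # Hyperion's actual planner searched over routes and resources.

        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            progress_m: float      # distance gained toward the goal (meters)
            energy_cost_wh: float  # battery drained by the action (watt-hours)
            solar_gain_wh: float   # expected charge collected while acting

        def choose_action(battery_wh, reserve_wh, actions):
            """Pick the action that makes the most progress without dropping
            the battery below its safety reserve; if none is safe, charge."""
            feasible = [a for a in actions
                        if battery_wh - a.energy_cost_wh + a.solar_gain_wh >= reserve_wh]
            if not feasible:
                # Nothing is safe: turn to face the sun and wait for charge.
                return Action("face_sun_and_charge", 0.0, 0.0, 50.0)
            return max(feasible, key=lambda a: a.progress_m)

        menu = [
            Action("sprint_to_hilltop", 400.0, 120.0, 10.0),
            Action("creep_toward_goal", 100.0, 25.0, 15.0),
            Action("wait_in_place", 0.0, 5.0, 40.0),
        ]
        print(choose_action(battery_wh=90.0, reserve_wh=60.0, actions=menu).name)
        # -> creep_toward_goal: the sprint would breach the energy reserve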

    We first tested Hyperion on Devon Island, in the Canadian high Arctic. [Ed. note: The Devon Island test site was above the Arctic Circle, where during northern summer, the sun never sets. As the Earth rotates, the sun appears to circle around the sky over the course of a day.] At that latitude (75.36°N), the simplest course that guarantees that you always have solar energy is to just drive around in a circle on a 24-hour cycle. But that’s not very interesting. So you need to think about where you want to go and what your science objectives are, but you also need to think about the resources that you need for driving. It becomes a little bit more like sailing. You tack against the sun using the energy that you’ve stored up until you get to a scientifically interesting site. Then you face the sun and charge up. You have to figure out how long you want to sit there and charge. And then maybe you have to move because the shadow from a hill is coming. You can put together very complex paths if you’re able to reason about your navigation and your resources all at once.
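
    To make the sailing analogy concrete, here is a toy Python model of why a body-fixed vertical panel turns driving into tacking: collected power depends on how far the robot’s heading is off the sun line. The constant 15-degrees-per-hour azimuth sweep and the power numbers are simplifying assumptions for illustration, not Hyperion’s actual figures.

        # Toy illustration of "tacking against the sun" with a panel fixed
        # to the robot body. Above the Arctic Circle in midsummer the sun's
        # azimuth sweeps roughly 360 degrees in 24 hours (about 15 degrees
        # per hour); elevation changes are ignored here.

        import math

        def sun_azimuth_deg(hour, azimuth_at_midnight=0.0):
            """Approximate solar azimuth in polar summer: one sweep per day."""
            return (azimuth_at_midnight + 15.0 * hour) % 360.0

        def panel_power_w(heading_deg, hour, peak_w=200.0):
            """Power from a body-fixed vertical panel: peak when the robot
            faces the sun, falling off with the cosine of the offset, and
            zero when the sun is behind the panel."""
            offset = math.radians(sun_azimuth_deg(hour) - heading_deg)
            return max(0.0, peak_w * math.cos(offset))

        # Driving 60 degrees off the sun line still collects half of peak
        # power, which is the sense in which route planning resembles sailing.
        for off in (0, 60, 90):
            print(f"{off:3d} deg off-sun -> {panel_power_w(off, hour=0.0):5.1f} W")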

    AM: The robot was able to do all that autonomously?

    DW: It did all that autonomously. It had about two hours of battery capacity, so it could survive two hours without any sunlight. We demonstrated running continuously for 24 hours and covering substantial distances. One circuit was 6 kilometers (3.7 miles), and not just going in a little circle; it went all over the place. We did another traverse that was 9 kilometers (5.6 miles) in 24 hours. Those rates are well over what you would need at southern latitudes on the moon to chase the sun.
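
    For a rough sense of scale on the lunar comparison, the sketch below estimates how far a rover must travel per Earth day to keep pace with the sun at high lunar latitudes, using the Moon’s roughly 29.5-Earth-day solar day. It is a back-of-the-envelope model that ignores terrain and shadowing.

        # Back-of-the-envelope check of the lunar "sun chasing" scale. A
        # rover at latitude phi must cover the circle of latitude, which is
        # 2*pi*R*cos(phi), once per lunar solar day to stay under the sun.

        import math

        MOON_RADIUS_KM = 1737.4
        SOLAR_DAY_EARTH_DAYS = 29.53   # one lunar solar day

        def chase_speed_km_per_day(latitude_deg):
            """Distance per Earth day needed to keep pace with the sun."""
            circle_km = 2.0 * math.pi * MOON_RADIUS_KM * math.cos(math.radians(latitude_deg))
            return circle_km / SOLAR_DAY_EARTH_DAYS

        for lat in (85.0, 88.0, 89.0):
            print(f"latitude {lat:.0f} deg: {chase_speed_km_per_day(lat):5.1f} km per Earth day")
        # The required rate shrinks rapidly toward the pole, dropping into
        # the single-digit kilometers per Earth day within about a degree.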

    AM: What happened with that project?

    DW: Well, it has continued in some ways. Following the sun-synchronous navigation project in 2003 we began the Life in the Atacama project. That was an ASTEP (Astrobiology Science and Technology for Exploring Planets) project looking at the distribution of microorganisms in the Atacama Desert. One strategy for doing that would be to figure out the most promising place, go there, and dig and dig and dig. With that approach you could measure to very, very high precision the number of microbial spores per gram of soil. It might be an almost unimaginably small level compared to the abundance of life everywhere else on Earth. But if you look closely enough, eventually you find something.

    We had a different strategy, which was to sample at many locations but to do it very rapidly with less resolution. In some areas you’re going to say, Okay, I don’t detect life here, but in other areas life will be more abundant, and you can start to map out the distribution of life. At the same time you can look at things like topography, solar radiation, humidity, temperature variation, soil composition, and rock-type habitats vs. sandy soils to try to understand the nature of the habitats and the factors governing microbial populations.
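
    As a toy contrast between the two strategies, the sketch below simulates a deep dig at one pre-chosen site against one quick, noisier measurement at every site. All numbers, units, and the noise model are invented; it only illustrates why broad, coarse sampling is what produces a map.

        # Toy contrast between the two sampling strategies. Life is patchy:
        # trace amounts everywhere, abundant only at a few sites.

        import numpy as np

        rng = np.random.default_rng(2)

        abundance = np.full(100, 1e-3)    # trace background, spores per gram
        abundance[[12, 13, 47, 48, 49]] += rng.uniform(1.0, 5.0, size=5)

        # Strategy A: 100 careful, low-noise measurements at one chosen site.
        # You resolve even the trace background, but learn about one spot.
        site = 30
        deep_dig = (abundance[site] + rng.normal(0.0, 1e-4, 100)).mean()

        # Strategy B: one quick, noisier measurement at every site.
        # Each reading is coarse, but together they map out the patches.
        survey = abundance + rng.normal(0.0, 0.1, 100)

        print(f"deep dig at site {site}: {deep_dig:.4f} spores/g (trace only)")
        print("survey flags patches at sites:", list(np.flatnonzero(survey > 0.5)))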

    So the conceptual approach of the Life in the Atacama project was to build a survey robot: something that could go fast, access a lot of places, take many measurements over broad areas, and create biogeologic maps. We created a robot named Zoë, which is similar in appearance to Hyperion, but rebuilt to support its instrument payload, and with a horizontal solar array, which is best for an equatorial environment where the sun is overhead much of the day.

    It carried an instrument called a fluorescence imager, created by Alan Waggoner, which used a high-intensity flash lamp and a high-speed, cooled CCD camera to create images of fluorescence in daylight. The fluorescence imager could detect chlorophyll, which is naturally fluorescent, and Zoë could apply chemical reagents to the soil that would bind to various other organic molecules and then fluoresce at different wavelengths. It created images that showed the abundance and location of DNA, carbohydrates, proteins and lipids. Zoë also had a plow, so it could dig, and a spectrometer, so it could measure mineral composition directly.
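
    The interview doesn’t spell out how fluorescence is imaged against full daylight; one standard approach, assumed in the sketch below, is to difference a flash-on frame against a flash-off frame so the steady solar background cancels. The image data and detection threshold here are synthetic.

        # Sketch of daylight fluorescence imaging by flash differencing:
        # subtract a flash-off frame from a flash-on frame so the steady
        # sunlit background cancels, leaving flash-induced fluorescence.
        # A generic illustration, not the actual instrument pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        ambient = rng.uniform(100.0, 200.0, size=(64, 64))  # sunlight background
        fluorescence = np.zeros((64, 64))
        fluorescence[20:30, 20:30] = 50.0                   # fluorescing patch

        frame_off = ambient                                 # flash off
        frame_on = ambient + fluorescence                   # flash on

        signal = frame_on - frame_off                       # background cancels
        detected = signal > 25.0

        print("fluorescent pixels detected:", int(detected.sum()))  # -> 100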

    AM: How did you control Zoë?

    DW: The whole of the Atacama project was done with autonomous navigation, so there was never anyone at the stick driving the robot. We had a science team in Pittsburgh, led by Nathalie Cabrol of NASA Ames, and they would look at satellite imagery and decide, This area is interesting and that area is interesting. So they would create a plan: go to that area, take some samples, then traverse across here and take samples as you go, and then go there. We created command sequences similar to the ones NASA uses for the MER rovers, uploaded those to the robot, and then it went and executed them autonomously, until something broke. Then we’d analyze the problem, fix it, and pick up the plan again.
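
    As a sketch of what such a command sequence might look like in software, here is a minimal Python version with an executor that halts on failure so operators can diagnose, fix, and resume, as described above. The command names and fields are hypothetical, not the actual Zoë or MER sequence format.

        # Minimal sketch of a daily command sequence: an ordered plan
        # uploaded once, then executed without a human in the loop.

        from dataclasses import dataclass, field

        @dataclass
        class Command:
            op: str                      # e.g. "traverse", "sample"
            args: dict = field(default_factory=dict)

        def execute(plan, do_command):
            """Run commands in order; on failure, halt and report how far we
            got, so operators can diagnose, fix, and resume from there."""
            for i, cmd in enumerate(plan):
                if not do_command(cmd):
                    return {"status": "halted", "at": i, "command": cmd.op}
            return {"status": "complete", "at": len(plan), "command": None}

        daily_plan = [
            Command("traverse", {"waypoint": (451200, 7312800)}),
            Command("sample", {"instrument": "fluorescence_imager"}),
            Command("traverse", {"waypoint": (452000, 7313400), "sample_every_m": 500}),
        ]

        print(execute(daily_plan, do_command=lambda cmd: True))
        # -> {'status': 'complete', 'at': 3, 'command': None}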

    Zoë was sometimes traveling over ten kilometers a day. At the beginning of the day the rover could not see where it would be at the end of the day, or much of the area between its starting point and its ending point. The only model of the terrain that Zoë had prior to the investigation was a satellite image at 30-meter resolution. It knew enough to avoid a hill that you could see in the satellite images, but all of the local obstacle detection, obstacle avoidance and route-finding was done by the rover. Scientists would create the rough plan, and what came back was that the robot had gotten around these hills, through this valley, and over to there.
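
    That division of labor, a coarse orbital map for route planning plus onboard sensing for everything smaller than a map cell, can be sketched as below. The terrain layout and the obstacle rate are synthetic stand-ins.

        # Sketch of two-level navigation: a coarse satellite grid (30 m per
        # cell) that the route planner can consult, and sub-cell obstacles
        # that only onboard sensing can catch.

        import numpy as np

        rng = np.random.default_rng(1)

        CELL_M = 30.0
        coarse_map = np.zeros((100, 100), dtype=bool)  # True = hill seen from orbit
        coarse_map[40:60, 50:55] = True

        def coarse_blocked(x_m, y_m):
            """What the satellite map knows: terrain at 30 m resolution only."""
            return bool(coarse_map[int(y_m // CELL_M), int(x_m // CELL_M)])

        def local_blocked(x_m, y_m):
            """Stand-in for onboard perception: rocks invisible from orbit."""
            return rng.random() < 0.02  # ~2% of steps hit a surprise obstacle

        x, y, detours = 0.0, 0.0, 0
        while x < 2900.0:              # head east toward a goal ~3 km away
            if coarse_blocked(x + 10.0, y) or local_blocked(x + 10.0, y):
                y += 10.0              # sidestep, then keep pushing east
                detours += 1
            else:
                x += 10.0
        print(f"reached x={x:.0f} m with {detours} local detours")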

    Along the way it would make observations of the local environment and try to make good decisions about specific targets. If it detected rocks of a particular type, that would trigger fluorescence imaging. Where it detected the signature of chlorophyll, it would automatically go on to take additional measurements. So in a sense it was making intelligent observations on its own.
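
    Those opportunistic triggers amount to simple condition-to-follow-up rules, sketched below. The rule set, field names, rock types, and thresholds are invented for illustration.

        # Sketch of opportunistic-science triggers: rules mapping one
        # observation to extra measurements worth taking on the spot.

        def follow_ups(obs):
            """Return the follow-up measurements suggested by an observation."""
            actions = []
            if obs.get("rock_type") in {"carbonate", "evaporite"}:
                actions.append("fluorescence_image")        # promising habitat rock
            if obs.get("chlorophyll_signal", 0.0) > 0.1:
                actions.append("apply_dyes_and_reimage")    # confirm, characterize
                actions.append("spectrometer_reading")
            return actions

        print(follow_ups({"rock_type": "carbonate", "chlorophyll_signal": 0.3}))
        # -> ['fluorescence_image', 'apply_dyes_and_reimage', 'spectrometer_reading']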

    AM: Do you consider the project a success?

    DW: Yes. We mapped about six different areas of the Atacama and we found that we can essentially create a map of life. We showed that we can interpret results on the fly, and that you get better science return when you use intelligent sampling methods vs. just a blind strategy.

    AM: There’s a lot of scientific interest in the habitability of the Atacama. Are there plans to continue the mapping project there?

    DW: It would be great to do a lot more mapping. But in this type of program our goal is to develop the concept and the technology, get out in the field, and show that it can work. So that’s where we are now. We have a system that can do large-scale mapping, so we’ll see what’s next. We’re a concept-development shop; we do the field deployment and testing, but it’s up to NASA to decide whether they want us to look for more meteorites or go map more of the desert. My experience has been that their tendency is to do the next new thing. This year that looks like the moon. But we don’t start from scratch every time; the fundamental technologies advance, and the algorithms and methods get better.
