Remote Sensing - is seeing believing?

Remote sensing means gathering information about the world from above. But how do we really know what we are "looking at"?

The fundamental problem is this. We want to know something about an object (how healthy are my crops, what minerals are in these rocks...) based on some radiation (e.g. sunlight, microwaves) that reflects off that object. There are two main ways of approaching this problem.

The first is the intuitive one: it is similar to the way we as humans sense the world. When a baby opens its eyes for the first time, it sees only patterns of light and colour (perhaps with a little innate understanding). Over time, the baby learns through experience how to match these abstract images to meaningful objects.

With remote sensing, we can do something similar. Say we have satellite images of some fields, and we want to know which of them are flooded. To be sure we know what we are looking at, we first visit some of the fields at the time the image is taken, to confirm whether they are flooded or not. We then have some labels that we can match to parts of our image. By building up a set of training examples in this way, we can construct (or learn) a mathematical relationship between the images and the information we are interested in.
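To make this concrete, here is a minimal sketch of the learning approach in Python, using the scikit-learn library. The pixel values and flood labels below are random placeholders; in a real study they would come from the spectral bands of the satellite image and from the field visits.

```python
# A minimal sketch of the learning approach (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
pixels = rng.random((500, 4))               # 500 pixels x 4 spectral bands
labels = (pixels[:, 3] < 0.3).astype(int)   # 1 = flooded, 0 = dry (toy rule)

# Hold back some labelled pixels to check the learned relationship.
X_train, X_test, y_train, y_test = train_test_split(
    pixels, labels, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                 # learn the image -> label mapping
print("accuracy on held-out pixels:", model.score(X_test, y_test))
```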

The trouble with the learning approach is that our mathematical models are still pretty basic compared to the intricacy of a human brain, and that learning requires lots of training. A computer often needs thousands or millions of training examples to figure out the complex relationships that exist between remote sensing images and the objects they depict.

[Figure: top, a remotely sensed image; bottom, the same image classified by a machine learning algorithm according to land cover. Images from this paper.]
The second way to approach the remote sensing problem is to try to predict those relationships before we even look at a single image. We can do this by taking what we know about the physics of light, the chemical make-up of our objects, the orbit of our satellite... and building up a model of what our image should look like.
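For a flavour of what this looks like, here is a toy Python sketch of such a forward model, based on a much-simplified radiative transfer equation for a Lambertian surface. All the numbers are illustrative assumptions, not real calibration values.

```python
# A toy forward model: predict the radiance a satellite sensor would
# measure, from first principles, using a much-simplified radiative
# transfer equation. Illustrative values only.
import math

def at_sensor_radiance(surface_reflectance: float,
                       solar_irradiance: float,
                       solar_zenith_deg: float,
                       transmittance: float,
                       path_radiance: float) -> float:
    """Radiance reaching the sensor (W m^-2 sr^-1 um^-1), assuming a
    Lambertian surface and one pass through the atmosphere each way."""
    cos_theta = math.cos(math.radians(solar_zenith_deg))
    # Sunlight attenuated on the way down, reflected by the surface...
    surface_leaving = (surface_reflectance * solar_irradiance * cos_theta
                       * transmittance) / math.pi
    # ...attenuated again on the way up, plus light scattered by the
    # atmosphere that never touched the surface.
    return surface_leaving * transmittance + path_radiance

# Predict what a healthy-vegetation pixel "should" look like in a
# near-infrared band, before ever inspecting a real image.
print(at_sensor_radiance(surface_reflectance=0.45,   # vegetation in NIR
                         solar_irradiance=1000.0,    # W m^-2 um^-1
                         solar_zenith_deg=30.0,
                         transmittance=0.85,
                         path_radiance=5.0))
```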

To me, from a physicist's perspective, this feels like the logical way to do it, because it aims to get to the root of things: it tries to explain why the image looks the way it does. But again there are some problems.

The main difficulty with the first principles approach is that the real world is very complex, and it is an overwhelming task to figure out exactly how to account for all the little effects of atmosphere, instrument design and natural variation in the objects we are imaging. Whilst we might be able to say for sure what an image of one particular tree would look like, there isn't really a perfect physical way to describe a tree in general.

So, which is better? Should we take the empirical route and look for patterns in our data, or the theoretical route and try to explain the underlying processes?

This question really comes down to different ideologies in science. There are those (especially mathematicians and physicists) who like to assume as little as possible, and try to proceed logically from one step to the next. And then there are those (especially in more applied sciences like ecology) who are more interested in observing the patterns of nature and working backward from there to the causes.

Remote sensing is a field that brings people from both ideologies together, and sometimes this debate leads to arguments over what makes a result 'valid'. However, I think this diversity of backgrounds and schools of thought ought to be a huge advantage- we have an opportunity here to look at the same problems from both angles, and hopefully meet somewhere in the middle!
