A Slice of Space

We are very close to takeoff, promise! But we won't be riding along in our spacecraft; we'll be navigating it from the ground. Therefore, we're going to equip our spacecraft with a program that lets us get a read on which direction we are heading, how fast we are going, and where we are! First: which direction are we heading? We have access to a camera aboard our spacecraft. But the thing about cameras is that they tend to produce, well... flat images. And real life, as you may have noticed, is not flat. This is normally not a problem if you simply want to take a picture of a friend or your lunch, but in space, to figure out where we are and where we're looking, we need to account for the fact that the sky isn't flat.


Have you ever read a map? Maybe you were forced, during physical education, to read a map and run at the same time (torture? Who's to say). Or maybe your history teacher pulled down a map and drew all over it with chalk in a strange attempt to explain the Second World War?

Point being, most of us have seen a map of some sort in our lives. And you probably noticed that maps are flat, but our Earth is not (if you can't agree with us here, good on you for questioning things, I guess(?), but there is no point in you sticking around). Now, flat maps make sense; nobody wants to run orienteering with a whole globe. But have you thought about how we can take something round and print it on a flat surface? And about how that may distort our view of things? (I mean, is Greenland really the size of Africa?)

This sort of transformation from the surface of a sphere to a flat surface is called a stereographic projection. In order to get an idea of the direction our camera is facing, we need to make a stereographic projection of the sphere of our sky! This way, we can compare an image taken from our spacecraft to the map we've made of the sky. We can make this map before launch from the images sent to us by a satellite orbiting our planet. When this satellite captures an image, we know at what angle around the equator it was taken. We plan on comparing these images to the ones our spacecraft will capture when it is in space: when we find the most similar image, we know what angle we are at!

Notice how each variable has a specific range that allows us to move to any point on a sphere.

First, we need to find a good way to relate spherical surfaces and flat surfaces. You might recall that we've talked about polar coordinates, which are useful when looking at circular surfaces. This time we have a spherical surface and will therefore use something called spherical coordinates. There are many ways to define a spherical coordinate system, but generally we pick an axis and use two angles that rotate around it with different ranges. We will use the ranges \(0\leq r<\infty\), \(0\leq\theta\leq\pi\), \(0\leq\phi\leq 2\pi\) for our spherical coordinates, as seen in the picture above. Keep in mind that we will only be moving in the xy-plane, and therefore we only rotate \(\phi\) around the z-axis, while \(\theta\) remains constant at 90°.
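
If you prefer to see conventions written down as code, here is a minimal sketch (in Python, which is what we use for everything else) of how these spherical coordinates relate to ordinary Cartesian coordinates \((x, y, z)\); the function name is just ours for illustration:

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Convert (r, theta, phi) to (x, y, z).

    theta is the polar angle measured from the z-axis (0 to pi),
    phi is the azimuthal angle around the z-axis (0 to 2*pi).
    """
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# With theta fixed at 90 degrees we stay in the xy-plane:
print(spherical_to_cartesian(1.0, np.pi / 2, 0.0))  # ~ (1, 0, 0)
```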

We can relate planar coordinates (X,Y) to spherical coordinates \((\theta,\phi)\) using a stereographic projection. This map transforms coordinates on the surface of a sphere to coordinates on a plane.

Three-dimensional illustration of our stereographic projection. See how each coordinate on the sphere has a corresponding coordinate on the flat surface.

There are several conventions for stereographic projections, but we will use one where the center of the projection (where X=Y=0, a.k.a. the origin) corresponds to a specific spherical coordinate \((\theta_0,\phi_0)\).
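
For reference, the standard formulas for a stereographic projection centered on \((\theta_0,\phi_0)\) have this form (we state them here without derivation):

\[X = \kappa\sin\theta\sin(\phi-\phi_0), \qquad Y = \kappa\left(\sin\theta_0\cos\theta - \cos\theta_0\sin\theta\cos(\phi-\phi_0)\right),\]

where

\[\kappa = \frac{2}{1+\cos\theta_0\cos\theta+\sin\theta_0\sin\theta\cos(\phi-\phi_0)}.\]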

Curious about how this transformation is derived? Check out this post!

The boundaries of the projection depend on the camera's field of view, which is defined as the maximum angular width of the picture. You can think of it as how wide our camera can 'see'. This also limits how large X and Y can get!
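
To make this concrete: with our \(\theta = \theta_0 = 90°\), the edges of the picture sit at \(\phi - \phi_0 = \pm\alpha/2\), where \(\alpha\) is the field of view in the \(\phi\)-direction. Plugging this into the projection formulas above gives

\[X_{\max/\min} = \pm\frac{2\sin(\alpha/2)}{1+\cos(\alpha/2)},\]

and the corresponding expression for \(Y_{\max/\min}\) uses the field of view in the \(\theta\)-direction.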

You may question how we get any sort of number or coordinate out of a picture.

The little square from the picture above, centered on the spherical coordinates \((\theta_0, \phi_0)\). Our \(\theta_0\) is 90° and \(\phi_0\) is 0°.

An image consists of many pixels; in our case 307,200 of them, which together form an image 640 pixels wide and 480 pixels tall. Each one of these pixels has three different values: one for red, one for green, and one for blue. Each value is a number between 0 and 255 which indicates the intensity of that color. For example, (0,0,0) would be black, while (255,0,0) would be red!

Our spacecraft actually gives us the RGB values for the pictures it captures. This long list of numbers contains, for any given spherical coordinate, the specific RGB value of the pixel at that point!
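
A picture like this is really just a three-dimensional array of numbers. A tiny sketch in Python/NumPy (the array here is one we make up for illustration):

```python
import numpy as np

# A 480x640 RGB image: one (R, G, B) triple per pixel,
# each channel an integer from 0 to 255.
image = np.zeros((480, 640, 3), dtype=np.uint8)  # all black to start

image[0, 0] = (255, 0, 0)    # top-left pixel is now pure red
print(image.shape)           # (480, 640, 3)
print(image[0, 0])           # [255   0   0]
```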

We know the dimensions of the image our spacecraft produces, its width and height, and we use these to grab values from our skysphere in order to make our map of the sky.

Using formulas for coordinate transformations between spheres and planes, we can convert the values we get from the image of the sky into flat images.

Using these coordinate transformations, we find X and Y for values of \(\phi\) from 0° to 360°, as we figure one picture per degree should be enough to compare against whatever picture our spacecraft takes. This produces 360 stereographic projections of our sky!
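
If you're curious how this might look as actual code, here is a minimal sketch in Python/NumPy. The inverse-projection formulas are the standard ones for the convention above; the `sky_rgb` lookup and the 70° field of view are stand-ins of our own (the real lookup would read the satellite data):

```python
import numpy as np

def inverse_stereographic(X, Y, theta0, phi0):
    """Inverse stereographic projection: plane coordinates (X, Y) back to
    spherical coordinates (theta, phi), for a projection centered on
    (theta0, phi0). Standard textbook formulas."""
    rho = np.sqrt(X**2 + Y**2)
    rho = np.where(rho == 0, 1e-15, rho)   # avoid dividing by zero at the center
    c = 2 * np.arctan(rho / 2)
    theta = np.pi / 2 - np.arcsin(
        np.cos(c) * np.cos(theta0) + Y * np.sin(c) * np.sin(theta0) / rho
    )
    phi = phi0 + np.arctan2(
        X * np.sin(c),
        rho * np.sin(theta0) * np.cos(c) - Y * np.cos(theta0) * np.sin(c),
    )
    return theta, phi

def sky_rgb(theta, phi):
    """Stand-in for the real skysphere lookup, which would return the RGB
    value at (theta, phi) from the satellite data. Here we just return a
    dummy gradient so the sketch runs on its own."""
    r = (np.degrees(phi) % 256).astype(np.uint8)
    g = (np.degrees(theta) % 256).astype(np.uint8)
    return np.stack([r, g, np.zeros_like(r)], axis=-1)

alpha = np.deg2rad(70)                      # field of view; a placeholder value
x_max = 2 * np.sin(alpha / 2) / (1 + np.cos(alpha / 2))
X = np.linspace(-x_max, x_max, 640)         # one X per pixel column
Y = np.linspace(x_max, -x_max, 480)         # one Y per row (same FOV assumed)
XX, YY = np.meshgrid(X, Y)

sky_map = np.zeros((360, 480, 640, 3), dtype=np.uint8)
for i in range(360):                        # one projection per degree of phi
    theta, phi = inverse_stereographic(XX, YY, np.pi / 2, np.deg2rad(i))
    sky_map[i] = sky_rgb(theta, phi % (2 * np.pi))
```

Each `sky_map[i]` is then the reference picture for \(\phi = i\)°.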

To give you a visual idea of what happens, here is one picture taken by our spacecraft at \(\phi=0\)°, next to the picture we mapped from the skysphere at \(\phi=0\)°:

Picture taken by the spacecraft.
Picture from the map.


They are the same! The only problem is, we don't usually know the angle at which the spacecraft takes a picture when it is in space. How can we find it?

Since we know the angle of every picture in our map, we can compare any picture taken by our spacecraft to these and deduce the angle. The pictures will not be exactly the same, so we need to look for the picture in our map with the least difference from the picture taken. In order to find the best-fitting image we can use the method of least squares, as we did when we wanted to analyze the light from a star. The input given to the least-squares method was specific to that situation, but the algorithm is the same:

\(\Delta_i = \sum\limits_{pixels} [RGB_{spacecraft \, picture} - RGB_{skysphere \, picture \, number \, i}]^2, \qquad i = 1, \dots, 360\)

We have the RGB values from the picture taken by the spacecraft, and the RGB values from the 360 pictures of our skysphere. For each skysphere picture, we subtract it from the spacecraft picture pixel by pixel, square the differences (we only care about the size of the difference, not its sign), and add them all up. We then find the picture for which the difference \(\Delta_i\) is smallest. If, say, picture number 248 gives us the smallest difference, the picture from our spacecraft was taken at \(\phi = 248^{\circ}\).
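
As a small sketch of this in Python/NumPy (array names are our own; shapes match our 640×480 images):

```python
import numpy as np

def best_angle(spacecraft_picture, sky_map):
    """Return the phi (in degrees) whose skysphere picture best matches
    the spacecraft picture, by least squares.

    spacecraft_picture: array of shape (480, 640, 3)
    sky_map: array of shape (360, 480, 640, 3), one picture per degree
    """
    # Cast to signed integers first so the subtraction can't wrap around.
    diff = sky_map.astype(np.int64) - spacecraft_picture.astype(np.int64)
    deltas = np.sum(diff**2, axis=(1, 2, 3))  # one Delta_i per candidate
    return int(np.argmin(deltas))             # index doubles as degrees

# If this returns 248, the picture was taken at phi = 248 degrees.
```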

If you thought this was confusing, don't worry, we were confused too. Imagine your friend went to the forest and came back with a lovely picture. You want to see the place in the picture, but your friend can't recall which way they were facing when capturing the image. Luckily, they recall exactly where they were standing. (Shouldn't they recall where the picture was taken, then? Don't ask us, ask your friend.) You can now travel to that spot and take a picture while turning slightly each time, 360 times, until you have an image for every direction. Now you can compare your pictures to your friend's picture, and when you find the most similar one, you know which direction your friend was facing when taking the picture.

This is exactly what we are doing! We have now transformed spherical coordinates \((\theta,\phi)\) into planar coordinates \((X,Y)\) and made a map we can use to navigate! Our spacecraft captures an image, we compare it to all the images we have, and from the most similar image we know the direction we are facing!

What our satellite sees as it rotates.


By Semya A. Tønnessen, Marie Havre
Published Oct. 11, 2020 12:57 - Last modified Oct. 11, 2020 14:47