Exploring ideas to speed up SSDV using the notion of ‘known images’.
First off, this is just an idea I had while messing with the Raspberry Pi and SSDV, and I wanted to put it somewhere to start a discussion.
Although everyone's live SSDV pictures are different, above a certain altitude they are remarkably similar: one person's image of the Earth at 80,000 ft isn't that different from the next person's. Sure, the angle might differ, but broadly they look alike. Could we use this fact to our advantage?
Could we create a set of ‘known images’ (categorised by altitude?) to serve as reference points for each transmission? These images would be preloaded on the Pi before launch and also held on the central Habhub server. A general process could be:
- Pi takes photo
- Pi looks through its ‘reference images’ for a similar photo using an image-difference algorithm
- Pi works out the ‘difference’ between the two images and transmits it, along with the known image’s reference number, as a hopefully much smaller string
- When Habhub starts receiving the image, it identifies which of the known images was used and applies the difference data to it. Once all segments have been received, the original photo should be viewable.
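The steps above can be sketched in a few lines of Python. This is only a toy illustration, not a real SSDV implementation: it assumes images arrive as flat lists of 0–255 greyscale pixel values, picks the reference with the smallest mean absolute difference, and zlib-compresses the per-pixel residual (taken mod 256 so each value fits in one byte). All function names here are hypothetical.

```python
import zlib

def pick_reference(photo, references):
    """Return the index of the reference image most similar to the photo,
    using mean absolute pixel difference as the similarity measure."""
    def mad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    diffs = [mad(photo, r) for r in references]
    return diffs.index(min(diffs))

def encode(photo, references):
    """Encode the photo as (reference index, compressed residual)."""
    idx = pick_reference(photo, references)
    ref = references[idx]
    # Difference mod 256 so every residual fits in one byte; when the
    # images are similar, the residual is dominated by values near 0
    # and 255, which zlib compresses well.
    residual = bytes((p - r) % 256 for p, r in zip(photo, ref))
    return idx, zlib.compress(residual)

def decode(idx, payload, references):
    """Rebuild the photo from the reference index and compressed residual."""
    residual = zlib.decompress(payload)
    return [(r + d) % 256 for r, d in zip(references[idx], residual)]
```

Because the subtraction and addition are both mod 256, the reconstruction is exact; the payload only shrinks when the photo genuinely resembles one of the references, which is the whole bet behind the idea.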
Unknowns and starting points for conversations:
- Is the Pi powerful enough to search the reference images and compute the difference in less time than it would take to simply transmit the whole image, as we currently do?
- Does such an algorithm exist?
- Do we have enough ‘known images’?
- What happens if some bits of the image fail to be received?
- Probably a lot more questions to be asked here!