In a previous post, I wrote about how I wanted to use Machine Learning to detect when a deer is present in my back yard, so that I can make sure no deer-built weapons of mass destruction are constructed in my yard (at least not on my watch!). After initially using a Microsoft Xbox 360 Kinect for my first data-collection phase, I ran into a problem: deer (and many other animals) are most active at night. The Kinect camera, while easy to use, doesn't have a sensitive enough IR camera to see in the darkness of my backyard, even with the help of some nifty IR emitters.
To solve the problem of night vision, I wrote about how I set up a Raspberry Pi with a NOIR camera to keep track of the murderous deer in the yard both day and night.
While the camera and IR emitters work great for capturing pictures, I ran into a timing problem. The NOIR camera requires very different settings between day-time and night-time exposures.
During the day time, I can use the camera's automatic white balance feature, along with an ISO of 100. However, at night I need to tweak the camera to use a very specific white balance, and adjust the shutter speed so that it stays open for nearly 3 seconds.
The main challenge comes when it is time to swap from day-mode to night-mode automatically. If I leave the NOIR camera on night mode for too long into the sunrise, I get images like this. Is it a nuclear explosion with a dreaded EMP!? Oh, no, it's just dawn.
Right now, I set the camera on a schedule. Using a small Python script, I get the camera to take a set number of pictures before stopping, and then swap settings from day to night, or night to day. This works okay, but there is one glaring problem – the sun rises and sets at a different time every day! Pesky sun!
While I could solve the problem by hooking into a web service that tells me when the sun is about to rise or set, I think a more interesting solution is to have the Python script look at each exposure, and swap the day and night settings by itself.
Note that this sounds like I’m proposing some sort of self-aware program... but I’m not talking about Skynet levels of intelligence – I don’t need to be avoiding murderous deer and a runaway, self-aware, weakly god-like super-intelligent Raspberry Pi. I’m thinking more along the lines of using color histograms. Nice, simple, and (usually) non-deadly color histograms.
The problem of when to transition between camera modes boils down to sampling an image and calculating the average pixel intensity. When it’s dark out, most of the image will become very dim, resulting in a color histogram that is close to black. When it becomes light outside, the opposite is true – the color histogram will be close to white.
So, how does one actually compute a color histogram? It’s easy! Usually, images are stored (or can be transformed into) an RGB format where each pixel has 3 color components: R (red), G (green), and B (blue). Each of these components has an intensity value that will usually take on a value between 0 and 255. For example, if there was a pure red pixel, its color value might be R = 255, G = 0 and B = 0. Pure black is R = 0, G = 0, and B = 0. Pure white is R = 255, G = 255, and B = 255. A nice brown deer color would be R = 130, G = 88, and B = 64.
To compute a color histogram, the program needs to loop through each pixel in an image, read the red (R), green (G), and blue (B) intensities, and tally how many pixels fall at each intensity level for each component. The result is something like this:
Intensity |    Red |  Green |   Blue
----------+--------+--------+--------
        0 |      0 |      0 |      0
        1 |      0 |      0 |      0
        2 |      0 |      0 |      0
        3 |      0 |      0 |      0
      ... |    ... |    ... |    ...
      237 |   8101 |   5949 |   5940
      238 |  15527 |   8518 |   8842
      239 | 214433 |  13467 |  16682
      240 |  24592 |  24301 | 208580
      241 |  38842 |  50694 |  59539
      242 |   7938 | 183090 |  14318
      243 |   6045 |    750 |   5157
      244 |   2121 |      8 |   3872
      245 |   1550 |      0 |   2063
      246 |    779 |      0 |   1280
      247 |    368 |      0 |   1022
      248 |    311 |      0 |    721
      249 |    168 |      0 |    451
      250 |    132 |      0 |    350
      251 |     84 |      0 |    257
      252 |     80 |      0 |    207
      253 |     40 |      0 |    149
      254 |     28 |      0 |    125
      255 |     73 |      0 |    469
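As a sketch, the counting loop described above might look like this in plain Python. I'm assuming the image has already been decoded into a list of (R, G, B) tuples by some imaging library; a real program would get pixels from OpenCV or Pillow instead:

```python
def color_histogram(pixels):
    """Tally how many pixels fall at each 0-255 intensity level,
    separately for the red, green, and blue components."""
    red = [0] * 256
    green = [0] * 256
    blue = [0] * 256
    for r, g, b in pixels:
        red[r] += 1
        green[g] += 1
        blue[b] += 1
    return red, green, blue

# Example: two pure-white pixels and one "deer brown" pixel
pixels = [(255, 255, 255), (255, 255, 255), (130, 88, 64)]
red, green, blue = color_histogram(pixels)
print(red[255], red[130])  # 2 1
```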
This is the actual histogram from the nuclear-dawn picture I showed above. I’ve truncated the data, cutting out intensities 4–236. Interpreting the chart is fairly straightforward. For example, 183,090 pixels have a G value of 242, meaning there are a lot of high green values. The bulk of the RGB values are clustered around the 240 intensity level, which I would expect if the picture has a lot of white in it (which it does).
Actually accessing the RGB data in a program is easy – it turns out the OpenCV project provides a Python module called cv2 that makes this possible:

import cv2

image = cv2.imread(filename)
# OpenCV loads images in BGR order, so channel 0 is blue and channel 2 is red
blue = cv2.calcHist([image], [0], None, [256], [0, 256])
green = cv2.calcHist([image], [1], None, [256], [0, 256])
red = cv2.calcHist([image], [2], None, [256], [0, 256])
This produces the histogram information for the R, G, and B components of the image. What is more useful is to plot the histograms to give you a feel for where the main bulk of the color intensities is located. Matplotlib is the Python package to use for that. Here is what the histogram data look like:
Notice how the red, green and blue components are all really high and clustered together! This is important! Now, let’s compare it to another picture. This is one where the camera was running in day mode, and it started to get dark out. Here is the picture:
Now let’s take a look at the histogram data:
Notice that the histogram data for both images tend to polarize toward one end of the spectrum or the other – for the nearly white picture, the bulk of the components are near 255; for the dark image, the bulk are near zero.
I can use this to my advantage in the Python program. All I have to do is see if the intensity values for each color component are near 0 or 255. However, for this to work, I need to calculate the weighted mean of each of the R, G, and B components, and compare them to some cutoff values.
The weighted mean calculation works similarly to the usual mean calculation. The difference is that instead of just summing up all of the counts for, say, the R component, I multiply each count by its intensity value, sum those products, and then divide by the total number of pixels. So, for the R component, the calculation becomes:
R_weighted_mean = ((0 * R[0]) + (1 * R[1]) + ... + (255 * R[255])) / sum(R)
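As a sketch, that calculation in Python, assuming hist is a 256-element list of pixel counts for one color component:

```python
def weighted_mean(hist):
    """Weighted mean intensity of a 256-bin histogram of pixel counts."""
    total = sum(hist)
    if total == 0:
        return 0
    return sum(i * count for i, count in enumerate(hist)) / total

# Example: a component split evenly between pure black and pure white
hist = [0] * 256
hist[0] = 100
hist[255] = 100
print(weighted_mean(hist))  # 127.5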
I just repeat that weighted sum calculation for the G, and B components. For example, the weighted values for the nuclear dawn photo is as follows:
| Red | Green | Blue | ----------+----------+------------+------------+ Wgt-Mean | 238 | 237 | 240 |
All that I needed now was to get my Python camera implementation to check for histogram data every few snapshots, and calculate the weighted sum of intensities. To get it to transition into day mode, I look for the average intensities of the R, G and B values to be greater than 230. For the night mode, I look for the average intensities to be less than 30. When it detects either of those situations, it will adjust the camera settings accordingly.
As an example, I created a time-lapse view from 6:00 am to 6:12 am. The program bounces back and forth a little between modes until it becomes bright enough in day mode to use the automatic settings (note that the animation loops!). I can tweak this a little by using slightly different cutoffs for the pixel intensities:
The flicker back and forth is because it’s still too dark for day mode. As the sun gets brighter, the program stabilizes in day mode. The full source code for the program is available in my GitHub repository. Check it out and feel free to experiment with it!
Using histogram data to swap camera modes is very easy. I showed how to compute the histogram data, and calculate the weighted mean of the intensity values. With both of these tools in hand, it’s easy to determine whether the camera should switch to night mode or day mode based on the light values! As long as the deer doesn’t figure out how to blow up the sun, I can leave my Raspberry Pi camera program running until I run out of disk space. This will collect good evidence if the deer manages to murder me, or, more likely, eats all of my vegetables.
Check back soon when I will (finally) discuss the Machine Learning component of the project.