Digital Solargraphy



Solargraphy (capturing the sun's path across the sky over several months on photographic paper with a pinhole camera) began gaining popularity in the early 2000s. When the technique really took off in the 2010s, it rekindled many people's interest in shooting on film and photo paper. Quite a few of them started leaving cans with a sheet of paper inside and a pinhole in the side in forests and in public places around cities - and I like this idea too.

On Solargraphy.com you can find hundreds of wonderful examples of such work.

Here are some more links:

  • Interview with the creator of the site.
  • Interview with Jens Edinger on how to make and hide a pinhole jar (in German).
  • The Solargraphy group on Flickr.
  • Motorized solargraphy.

And although these pinhole cameras made from beer cans and sewer pipes look very home-made, they can also be bought ready-made in a store. Ready-made kits certainly make solargraphy a more accessible hobby, although building such a camera yourself is not particularly difficult either.

However, although I love pictures shot on film (or, as in this case, on paper), I have gotten rid of all my analog equipment. Too much hassle.

How about making the same photo, only without film?

Theory


Task


Long-exposure photography is easy: reduce the sensor's sensitivity to light and keep the shutter open for a few seconds. If you push the exposure time much further, the image becomes terribly noisy. The next step is to take many photos with short exposures and average them; in this programmatic way an exposure of almost any length can be simulated. You can even simulate a day-long exposure if you compute a weighted average based on the exposure values of the individual images. Cool! Unfortunately, this approach alone does not work for solargraphy. The image of the sun "burns" into the film [photo paper] and stays there forever, but in an averaged stack the bright point of the sun is washed out, so it will not be visible in a digital long-exposure emulation. Damn it...
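The averaging trick can be shown in a few lines of numpy. This is a sketch with a synthetic scene, not the author's actual pipeline: stacking N short exposures keeps the signal while the noise shrinks roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(42)

# A flat "true" scene and 64 short, noisy exposures of it.
scene = np.full((64, 64), 0.5)
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(64)]

# Averaging the stack simulates one long exposure: the signal stays
# put while the sensor noise shrinks roughly as 1 / sqrt(N).
stacked = np.mean(frames, axis=0)
```

With per-frame weights proportional to 2^EV the same idea extends to a day-long exposure, and this is exactly why the sun's point disappears: it is bright in only a handful of frames, so the average dilutes it.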

24 hours exposure:



Processed result:



How do we solve this problem? When taking the individual photos, we need to track the points that would be "burned out", i.e. solarized. Along with each correctly exposed photo we take one more - with the minimum possible amount of light reaching the sensor. The assumption is that any photon that still reaches the sensor in this second, much darker photo comes from a point bright enough to leave a permanent mark on film.

Let's digress for a second and talk about the exposure value (EV). For a correctly exposed photo taken at 1 s, f/1.0 and ISO 100, the EV is 0. Half a second with the same settings gives EV 1, a quarter of a second EV 2, and so on. Wikipedia says an overcast day sits at about EV 13, and direct sunlight at about EV 16. A typical DSLR shutter goes up to 1/4000 s, most lenses stop down to f/22, and the lowest ISO setting is 25, 50 or 100. At 1/4000 s, f/22 and ISO 100 the EV comes out around 20-22. So we can use EV as a measure of scene brightness (at correct exposure), and at the same time as a measure of the maximum brightness the camera can cope with without overexposing. In effect it counts both the photons reaching the camera and the photons the camera manages to block during the exposure. How high must the EV be for us to reliably determine which parts of the film would be burned out? In practice, the cleaner the sky - the fewer the clouds, haze, dust and water droplets scattering light in the atmosphere - the lower the camera's maximum EV needs to be: at 1/4000 s, f/22 and ISO 100 the camera catches so few photons that whatever it still registers must be incredibly bright.
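The EV arithmetic above is easy to check in code. A minimal sketch, assuming EV is normalized to ISO 100 (so a higher ISO subtracts stops):

```python
import math

def exposure_value(shutter_s, aperture, iso=100):
    """EV at ISO 100 for the given settings.

    EV = log2(N^2 / t), where N is the f-number and t the shutter
    time in seconds; each doubling of ISO lowers the light needed
    by one stop, so we subtract log2(ISO / 100).
    """
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)
```

For example, `exposure_value(1, 1.0)` gives 0 and `exposure_value(1/4000, 22)` lands near 21; adding the roughly 6 stops of an ND64 filter lifts the ceiling to about EV 26-27.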

But every sunlit edge of a cloud also becomes unrealistically bright, and if the camera cannot suppress that brightness, we cannot reliably tell whether a given point was bright enough to leave a mark on film. In reality it would not have left a trace, of course, but we cannot reliably distinguish a bright cloud from the sun. In my experience, if the lighting conditions are not known in advance (as is usually the case over the European part of the continent), we need an EV of at least 24.

However, there is a simple way to shift the window of achievable EV values: a neutral-density filter. It significantly reduces the amount of light reaching the sensor, so the camera will no longer be able to capture images at dawn, at sunset or at night - but in our case this does not matter, since those frames contribute almost nothing to a multi-day exposure (compared with the contribution of the bright daytime frames). An ND64 filter (2^6) removes about 6 EV (exact values are never guaranteed with ND filters), which gives us a maximum of EV 26. What does that look like?


Image with the correct exposure, EV 11


A bit darker (EV 14)


Close to what digital cameras can give (EV 19)


And here is our filter result - EV 26

Is this enough? Yes, I think so.

Software


So how do we process all this? Every X seconds we take a photo with the correct exposure, and immediately after it a photo at EV 26. From the first series a long exposure is simulated: the EV is computed from the EXIF data of each frame, an offset is added, and two raised to the power of the offset EV is used as the weight when averaging pixel values.
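The processing of the first series might look like this sketch. The EXIF field names and the dict-based interface are illustrative assumptions, not the author's actual code:

```python
import math
import numpy as np

def ev_from_exif(exif):
    """EV (normalized to ISO 100) from EXIF fields. The dict keys
    used here are an assumed interface for illustration."""
    t = exif['ExposureTime']
    n = exif['FNumber']
    iso = exif['ISOSpeedRatings']
    return math.log2(n * n / t) - math.log2(iso / 100)

def weighted_stack(images, exifs):
    """Weighted average of the correctly exposed series. Each frame
    is weighted by 2**EV, i.e. by the actual brightness of the scene
    it recorded, so bright daytime frames dominate the result."""
    evs = np.array([ev_from_exif(e) for e in exifs])
    weights = 2.0 ** (evs - evs.max())  # shift exponent to avoid overflow
    acc = sum(img * w for img, w in zip(images, weights))
    return acc / weights.sum()
```

Note that once the weights are renormalized by their sum, a constant EV offset cancels out; it only matters if the weighted sum is used without renormalization.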

This will not work for the second series - averaging would wash out all the "burned" pixels. Here we simply stack all the images and keep the brightest pixel at each position.
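The brightest-pixel stack for the dark series can be sketched with numpy in a few lines:

```python
import numpy as np

def sun_trail(dark_frames):
    """Stack the heavily underexposed (~EV 26) frames, keeping the
    brightest value seen at each pixel. Unlike averaging, this
    preserves every position the sun 'burned' into any frame."""
    trail = np.zeros_like(dark_frames[0])
    for frame in dark_frames:
        trail = np.maximum(trail, frame)
    return trail
```

The overlay in the following step can be done the same way, keeping the brighter of the two layers at each pixel.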



After that, we simply lay the second result over the first:



Awesome! But how many images do we need, and how often should we shoot them? The interval depends on the focal length (the wider the shot, the smaller the sun and the larger the allowable gap). In my case, for a wide-angle shot (about 24 mm), the maximum workable interval was 60 s, and the ideal one was 45 s. With an interval longer than 60 s, the arc of the sun's path breaks up into overlapping circles, and in the limit into a plain string of pearls. You can, of course, cheat and apply Gaussian blur to the sun-trail image to smooth the corners and smear the solar circles together.


90 s interval: artifacts (the large gaps are caused by clouds covering the sun)

The number of correctly exposed images needed depends on the motion in the scene, but 60 to 90 frames work well even for the smallest details.

Hardware


Not bad. Now we have a workable recipe for digital solargraphy. But we still have to capture the actual images. How do we build a (relatively) expendable camera, given that there will always be annoying birds, or even more annoying tidy-minded caretakers ready to carry it off? By some enthusiasts' accounts, 30 to 50% of cameras left in the wild for six months (the period from the winter to the summer solstice, i.e. from the lowest to the highest position of the sun in the sky) are lost. I am not counting on six months, but it is still worth being prepared to lose a couple of cameras. The smallest and cheapest camera can be built from a Raspberry Pi Zero with the Pi Camera Module. That gives a "whole" 8 megapixels - but that's fine, we don't need crisp, sharp photographs anyway. Add the electronics to switch it on at set intervals, a battery, an add-on smartphone lens and some frighteningly strong neodymium magnets, all inside a 3D-printed case.











Technical details: a Raspberry Pi HAT with a SAMD21 microcontroller (the chip from the Arduino Zero) is powered by two 18650 cells and wakes the Pi every 60 s while it is light outside, or less often when it is dark. The Pi boots, takes a few photos, and powers off. The system runs for 2.5 days on batteries and generates 10 GB of images per day. To boot fast enough to meter the light, take a few photos, save them and shut down - all within 60 seconds - a minimal buildroot distribution is installed instead of the bloated Raspbian.
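The wake-up logic boils down to a tiny decision rule. This sketch is purely illustrative - the EV threshold and the night interval are made-up values, not the firmware's actual code:

```python
def next_wake_interval(ev, day_interval_s=60, night_interval_s=600,
                       dark_below_ev=5):
    """How long the power HAT should sleep before waking the Pi again:
    every 60 s while it is light outside, much less often when it is
    dark. The EV threshold of 5 and the 600 s night interval are
    hypothetical values chosen for illustration."""
    return day_interval_s if ev >= dark_below_ev else night_interval_s
```

Since the night frames contribute almost nothing to the final image, skipping them saves both battery and storage.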



The hardest part of such a project is making a 3D-printed case that is protected from the weather. I found a good solution: a 3 mm EPDM (ethylene propylene diene monomer) rubber gasket seated in a groove provided in the housing.





Images


Examples taken in Weimar:



















Problems and weaknesses


To detect the "burned-out" pixels I used individual frames: either a trace was present in a frame or it was not; I made no cumulative measurements. If moving cars get into the frame, an effect appears comparable to the behavior of real film. When reflections from glass and metal produce a scattering of small bright points, that noise, spread across a few dozen photographs, is barely noticeable to the eye. The following photo by Michael Wesely is a good example of how this looks on film:



I want one too!


Cool! You will have to do some work with your hands, though. Resources:

