Back in July 2020, I set up an old phone and an assortment of other parts in a new tree plantation to watch the trees (and grass) grow. Over the following three years, it has taken well over 196,000 photos.

Selected photographs are combined into High Dynamic Range (HDR) images, which are then used as frames to create a video. This page will be updated as I obtain and process more photos.

I find it fascinating to watch the trees shoot up out of nowhere and the grass change colour and grow with the seasons before disappearing again as it is mown (it generally takes a couple of viewings to spot many of the details).

Camera design

The camera freshly installed in its corner of the plantation.

The camera is made from various parts I had lying around: an old Android phone, a car USB charger, a cordless drill battery, a very crude charge controller (a constant-current supply based on an LM317 and a few diodes) and a solar panel.

The Automate app is used to schedule when photos are taken. When it is time, Automate calls a shell script that simulates a button press to turn the screen on, launches the take-photo activity of the Open Camera app and waits while it takes three photos using exposure bracketing. Once this is done, the script closes the camera app and turns the display off again. A sketch of such a script follows.
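
A minimal sketch of what the script could look like, assuming a rooted phone where the input and am commands are available and Open Camera is already configured for a three-shot bracket; the activity name is how Open Camera exposes a take-photo shortcut as far as I know, and the timings are guesswork:

    #!/system/bin/sh
    # Wake the device so the camera can start (assumes the lock
    # screen is disabled or bypassed).
    input keyevent KEYCODE_WAKEUP

    # Launch Open Camera's TakePhoto activity, which takes a photo
    # (here, a three-shot exposure bracket) as soon as it starts.
    am start -n net.sourceforge.opencamera/.TakePhoto

    # Wait for the bracketed exposures to be captured and saved;
    # the real script may poll for the files instead of sleeping.
    sleep 20

    # Close the camera app and turn the display off again.
    am force-stop net.sourceforge.opencamera
    input keyevent KEYCODE_SLEEP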

Logging is performed so that issues can be analysed and prevented. The logs include those from the Automate app, the shell script and the Battery Log app. Observations and notes made during servicing are also written down.

Processing workflow

Image processing is undertaken with various pieces of software managed by shell scripts and a makefile. Since this is an ongoing project, I have automated the process as much as possible to make it easier to generate videos more frequently.

  • Every few weeks or months, I retrieve the photos and logs from the phone.
  • A shell script selects the required photos based on the dates they were taken: all photos are used at the start and end, while only one per day is used for the rest of the time. For each selected photo, the exposure-bracketed images are combined using Luminance HDR. Combined images are cached between processing runs to improve performance (this step is sketched after the list).
  • ImageMagick is used to generate the title and end slides based on the image-selection parameters in the shell script (also sketched below).
  • For each frame of the video, a symlink is made to provide consecutive frame numbers without having to rename the actual image files (see the numbering sketch below).
  • timelapse-deflicker is used to reduce some of the flicker and exposure differences between frames.
  • FFmpeg is used to assemble the frames into a video. The drawtext filter adds each frame’s capture date to the video, and FFmpeg is also used to add audio from the YouTube Audio Library (an illustrative invocation is given below).
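
The selection-and-merge step boils down to something like the following, assuming one directory of bracketed shots per selected day; the directory layout and the luminance-hdr-cli flags are illustrative rather than copied from the real script:

    # Merge each selected day's bracketed exposures into one frame.
    for day in photos/*/; do
        out="hdr/$(basename "$day").jpg"
        # Cache: skip days that were merged on a previous run.
        [ -e "$out" ] && continue
        # Combine the bracket and tone-map it to a single LDR image.
        luminance-hdr-cli -o "$out" "$day"/*.jpg
    done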
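
Each slide is a single ImageMagick command; the text, resolution and colours here stand in for the real parameters:

    # Render a 1920x1080 title slide with centred white text on black.
    convert -size 1920x1080 xc:black \
            -fill white -gravity center -pointsize 72 \
            -annotate 0 "Tree plantation timelapse" \
            slides/title.jpg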
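
The consecutive numbering itself is a small loop of symlinks, so FFmpeg’s image-sequence input can find the frames without the originals being renamed; the paths are assumptions carried over from the sketches above:

    # Number the frames 000000.jpg, 000001.jpg, ... as symlinks.
    n=0
    mkdir -p frames
    for f in slides/title.jpg hdr/*.jpg slides/end.jpg; do
        ln -sf "$(realpath "$f")" "$(printf 'frames/%06d.jpg' "$n")"
        n=$((n + 1))
    done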
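
Burning a different date into every frame needs some per-frame mechanism for drawtext; one possibility is FFmpeg’s sendcmd filter, shown in this stripped-down sketch (the frame rate, font path and file names are illustrative):

    # dates.cmd has one line per frame, e.g.:
    #   0.00 drawtext reinit 'text=2020-07-15';
    ffmpeg -framerate 25 -i frames/%06d.jpg \
        -vf "sendcmd=f=dates.cmd,drawtext=fontfile=/usr/share/fonts/TTF/DejaVuSans.ttf:text='':fontcolor=white:fontsize=48:x=20:y=h-70" \
        -c:v libx264 -pix_fmt yuv420p video.mp4

    # Mux in the music track; -shortest ends the output with the
    # shorter of the two streams.
    ffmpeg -i video.mp4 -i music.mp3 -c:v copy -c:a aac -shortest final.mp4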

All videos I have processed and uploaded so far