January 2024 update

Testing Pi-lomar on the Raspberry Pi 5

Will Pi-lomar run on a Raspberry Pi 5?

Spoiler alert! No.

Not yet.

The camera and GPIO libraries have changed, but how close is it to working?

Interestingly, more of the required packages that made the RPi 4 Buster build tricky seem to be pre-installed in Bookworm now. I only needed to add opencv, astroalign, pandas and skyfield, and they all installed cleanly; no conflicts or special tricks needed.

sudo apt install python3-skyfield
sudo apt install python3-opencv
sudo apt install python3-astroalign
sudo apt install python3-pandas

The resulting build script will be much simpler, I hope. I’m still installing globally rather than creating containers because the RPi will be dedicated to the telescope.

The pilomar.py program of course errored out fairly quickly, but with relatively little change I got it up and running as far as the first menu. That includes all the data loading and formatting that has to happen when you first run the software.

Right out of the box I have to say “wow!”, I’m impressed.

For comparison: the 2GB RPi 4B with the 32-bit operating system takes about an hour to calculate the Hipparcos catalogue of 100,000+ stars. On an 8GB RPi 5 with the 64-bit operating system it ran in 25 seconds. It was so fast that I thought it had failed; I had to speed up the progress messages to prove it was doing something. From nearly 60 minutes down to 25 seconds! In regular use I’d estimate Pi-lomar runs about twice as fast on the RPi 5.

It looks like the basic migration should be straightforward, and there is capacity there for extra features.

Raspberry Pi 5 Active Cooling hint!

The official cooling unit is great – it’s very easy to attach to the RPi5. BUT – you can’t detach it. So if you’re thinking of later putting it into a case or occasionally reorganizing things, be very careful.

For example: I like the Pibow cases, but a couple of design choices clash. If you connect RPi+Cooler first: You cannot fit all the layers of the case.
If you connect RPi+Cooler second: You cannot remove all the layers of the case, and the camera connectors become more difficult to access.

Next time I’ll change the little spring-loaded feet for nylon bolts so the cooler can be removed – that’s the fundamental design flaw to me.


Back to the RPi 4B version

Motorcontroller PCB

Unpopulated PCB as received from the manufacturer. No components; you have to add those yourself.

The first PCB design is done and the Gerber files are now in the GitHub project for the PCB. These files can be used to manufacture the PCB. It still needs to have components added, but the wiring is all set in the PCB itself. Many thanks to Dale, Mark, and Ton for their help with the designs so far.

The published PCB has a few improvements on it.

  • Full ground plane on the underside of the PCB.
  • 12V supplies have more copper too.
  • The unused ADC0 pin on the Tiny2040 is now available in the expansion section for your own use.
  • A number of GPIO pins from the RPi header are now exposed in the expansion section.
  • Some development features (LED and motor power measurement) are removed.
  • PCB connector blocks have been standardised.
  • Printed warning to take care when connecting USB and GPIO at the same time.
  • NOTE: On the published Gerber files, the ‘R1’ resistor is marked with a lower value than these images show. Any value from 320 – 680 Ohms seems to work fine. The lower the value, the better the transistor switches.
The published motorcontroller PCB, populated at home with the necessary components.

I have added a new folder to the GitHub project to contain the Gerber files.

https://github.com/Short-bus/pilomar/tree/main/gerber

The files for the PCB are in the /gerber/PCB-2023-12-14 folder on GitHub. You must generate a .zip file from here to use for manufacturing.

cd gerber/PCB-2023-12-14
zip PCB-2023-12-14.zip *

The PCB-2023-12-14.zip file is the one that you should submit for manufacturing.

The file gerber/readme.txt explains more about the manufacturing specifications you will need to provide when placing an order.

A second PCB design is still in testing at the moment; this one eliminates the separate Raspberry Pi power supply by adding a buck converter onto the board to act as the RPi’s power source. Everything then runs from the motor 12V supply.

Software development

At the end of December I released a few improvements to the software, fixing some issues that the early builders found. I think people should be taking their first images soon, so I’ve done a little more development in January to help with tuning the telescope.

The tracking algorithm works for me, but I suspect it needs fine-tuning for individual observing conditions. There are some great-sounding locations where people are building Pi-lomar at the moment. So I’ve started adding some simple tools to help get the tracking parameters right. The idea is to show the results of the tracking calculation and the related parameters which can affect how it works. (I must explain how the tracking solution works soon.)

Getting the telescope working south of the Equator! I am at 50+ degrees North here, and out of extreme caution I put a warning on Instructables and in the software that the telescope might not work if you go too far south. But there is interest in making copies in the Southern Hemisphere, so with help from volunteers I’m looking at addressing some minor irritations in how the telescope behaves as you move further south. It looks like Pi-lomar will already work, but with a movement reset while tracking objects through due North. So the January release will now accept Southern latitudes for the home location and just warn you that support is still under development.

There’s now a parameter “OptimiseMoves”. When you turn it on, the telescope will move much more freely through due North, which should eliminate some of those irritations.

Screenshot showing the warning that pilomar.py generates when the OptimiseMoves parameter is enabled. It should make operation smoother when observations face north much of the time.
The effect of the OptimiseMoves parameter. By default the telescope will not pass North (360/0 degrees); it reverses back to the other side and resumes there. Enabling OptimiseMoves allows the telescope to rotate freely past North in either direction.
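The behaviour in that diagram can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual Pi-lomar implementation:

```python
def azimuth_move(current, target, optimise_moves=False):
    """Signed azimuth move (degrees) from current to target.

    Illustrative sketch of the OptimiseMoves idea, not Pi-lomar's code.
    With optimise_moves off, the telescope never crosses due North
    (0/360 degrees); any move whose shortest path would sweep through
    North takes the long way round instead.
    """
    # Shortest signed move, in the range (-180, 180].
    diff = (target - current + 180) % 360 - 180
    if optimise_moves:
        return diff
    end = current + diff
    if end < 0 or end > 360:  # the short path would pass through due North
        diff = diff - 360 if diff > 0 else diff + 360
    return diff
```

For example, moving from azimuth 350° to 10° is a short 20° move with OptimiseMoves on, but a 340° reversal the other way around the circle with it off.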

I’ve opened discussions on the GitHub site for anyone who wants to join in. When the feature is fully developed and proven to work that will become the normal operation everywhere.

The January improvements will be merged back into the main branch in the next few days.

Actual observations

It has been almost constantly cloudy here for months now. And the back yard is turning to mud, even the dog is reluctant to go out there. Really frustrating! I’m SO desperate to get out and get some more images captured. Nights on the east coast seem to come in three flavours…

  1. Too cloudy, calm, no moon.
  2. Clear, too windy, no moon.
  3. Clear, calm, full moon.

I’m hoping that some of the other builders will start capturing soon, maybe people can share those images too.


Hardware developments

Upgrading mk1

I have two completed Pi-lomar telescopes at the moment. After a break from 3D printing, I’m returning to the earlier build to upgrade it. Its drive mechanism now feels less smooth than the Instructables version’s. That’s a relief! All the tweaks I put into the Instructables version made a difference. So I’ll be tearing mk1 down and testing some further improvements to the drive and telescope tower. I’ll take the opportunity to widen the camera cradle; that will allow more room for higher-quality lenses, and also let me test the idea of an RPi5 with twin cameras later this year.

First version of the twin camera cradle, a prototype 3D printed part. It will need an RPi5, but could run a 16mm lens for targeting/tracking and a separate 50mm lens for observations. (I hope!)
Modified design for the tower walls under way on the 3D printer, making more space inside the dome for the twin camera cradle and larger lenses.

Removing the infrared filter

Finally time to rip that infrared cut-off filter out of a Hi Quality sensor. The official instructions work; it is simple to do. The lens mount comes off the sensor board easily and the IR filter pops out cleanly with a gentle push. I put the sensor inside a fresh, clean freezer bag while making the mod to minimise dust. I have left the sensor exposed, protected only when a lens is attached, though I may try to re-cover it with OHP film as suggested, since exposed sensors are dust magnets!

I placed the 16mm telephoto lens on the sensor and stuck it on a tripod just to see what things looked like. Everything has now gone ‘pink’ so SOMETHING has changed anyway!

Tripod-mounted photo of Orion’s sword, captured very quickly with the 16mm telephoto lens under a moonlit sky, just moments after removing the infrared filter from the Raspberry Pi Hi Quality sensor. It’s all gone PINK, so there is definitely a change in the wavelengths being captured. Just waiting for less moon and less wind to test it properly.

It’s not clear how wide the HiQ sensor’s infrared sensitivity is, but I think any expansion of wavelengths will be interesting to play with.

Fiddly focusing

I had to refocus the lens when I reattached it, and realised there is a better way to get it in focus. The focus ring of the 16mm lens is not very precise compared with larger lenses, and I’ve always struggled a bit with this. I tried a different approach this time.

I set the lens focus ring fully to ‘FAR’ and locked it off. Then I released the screw clamping the sensor-mounted rear focus ring. That has a much finer screw thread and a more positive movement, and it allows really fine focus adjustment. It is mentioned in the official instructions, but I think it’s the BEST way to focus if you’re being fussy.

With this, the ‘raspistill --focus‘ command trick and some patience, you can get quite fine control over the focus. You DO need a monitor connected via the HDMI port though; the preview image does not appear through VNC or PuTTY sessions.

As always, it’s best to close the aperture ring a little to increase depth of field. I always reduce it to f/2.8 so it’s still bright; you can reduce further if you are having problems.

Light pollution

We sit just outside an expanding town which is switching to LED streetlighting, and light pollution is an increasing problem. I have purchased a simple ‘light pollution‘ filter to add to the 50mm Nikkor lens. I will be testing it as conditions allow; I wonder if it helps, and I hope it doesn’t block infrared!


Other builds

As mentioned earlier, quite a few builds of the Instructables project are now underway. The first ‘I made this‘ post has appeared (well done Jeff!), and from the messages I have seen there are a few more nearing completion.

It looks like the most common pain points have been the PCB (see above) and sourcing the motors and worm gear components. Hopefully PCB sourcing is easier now with the published Gerber files. I saw a couple of cases where people shared a PCB order to reduce costs.

For the worm gear, I wonder if a future design could switch to using planetary gearboxes on the Nema17 motors instead. They seem to be more widely available, and can even be purchased as complete units. They may require a rethink of the drive mechanism, I have ideas already.

At least one builder is improving the weatherproofing of the design, that will be exciting to see when it is ready. I think there is a lot to learn from that development if it happens.

There are a couple of really interesting alternative motor controller designs out there too, including alternative ways to power/reset the Tiny2040 as well.


Off the shelf motorcontrollers

I mentioned in December that I hadn’t found a suitable off-the-shelf motor controller board yet. Well, in the tech world a month is a long time. I recently came across an announcement from Pimoroni about their new YUKON board, and the specifications sound interesting: it supports high-current stepper motor drivers, has a modular design and an onboard RP2040. There’s a fully featured builder’s kit available, but you can also buy the bare board and individual driver components. Pimoroni’s website has a couple of overview videos, and there’s an example robot project on YouTube by Kevin McAleer. I’d like to try one of these at some point, IF I find the time. Maybe someone else will give it a try?

Pimoroni’s website image of their new Yukon robotics board. I haven’t tested it yet, but the specification sounds really useful for controlling the telescope motors without having to build your own PCB.

So, quite a bit to test now. Here’s hoping for some clear, calm, dark skies before the winter is over!

April 2023 update

Build instructions

Life got spectacularly busy over the winter so the build instructions for the telescope are of course behind schedule. But I’ve been chipping away at it.

I’ve made some very amateur build videos which I’m currently editing and getting on to a draft Instructables page for the project.

I’m also slowly digging into GitHub to make the software and installation scripts publicly available.

Rotten weather

It’s been another winter of incredibly cloudy nights, including missing out on some fantastic aurora displays recently. Astronomy really requires some calm acceptance of what the universe throws at you, doesn’t it?

When I have been able to work on the project I’ve been testing, testing, testing. In bad weather or daylight it’s hard to capture images, of course. The software can simulate basic images if you cannot see through the clouds; that feature was originally just a learning exercise, but it’s proven really useful this year.

Improved tracking

Recently we had a rare incredibly clear and still night, the best I’ve seen in years, but as we are now into spring skies the objects I have been photographing are disappearing into the west. And of course a beautiful full moon was slap bang in the middle of some new potential targets!

Anyway, I decided to exercise the tracking features of the telescope on some new areas of the sky, and found an unusual behaviour when I use ‘astroalign’ to detect and correct tracking errors. My attempts to make the tracking easier had actually been confusing it in some circumstances. I am very grateful to Martin Beroiz for his quick response to my question; I learned some valuable improvements I can make to my tracking strategy.

Different lenses

In the interests of ‘I wonder what will happen if…’ I also swapped out the 16mm lens for a 50mm lens. This gives a much narrower field of view and takes the telescope to the limit of motor precision. I’m not going to redesign this version, but it’s been a useful exercise to fine-tune a number of features. If it can work well with the 50mm lens, it should be even more solid with the intended 16mm one. The telescope is still designed for the 16mm lens, but this opens up the lens choice a bit.

The first 50mm lens I received had some slack in the mechanism; it took me a few nights to realise why I couldn’t focus it properly and why all the stars looked like comets… but customer service at The Pi Hut was fantastic and I soon had a replacement in hand. Anyway, it looks like the telescope works with lenses between 16mm and 50mm, a range of about 5° to 20° field of view.

Many Messier objects are quite small with the 16mm lens, so longer lenses make more targets viable.

Infrared Filter

I’m still looking for some insight into the benefits of removing the IR filter on the HiQ sensor… I’ve not found any clear confirmation that it’s a real improvement for astrophotography yet. My limited understanding of the documentation suggests that it may not increase detection very far into IR anyway. I think I will wait before hacking the filter off; maybe it’s an experiment for when I retire an earlier build of pilomar that I still occasionally use. (Yes; I have 2 in action. No; I have no idea how to do interferometry with them 😀)

Hardware shortage

The shortage of Raspberry Pis has been a bit of a frustration. I’d like to swap out for a larger-memory version, but that’s going to have to wait. And maybe now it’s better to wait and see if an RPi 5 comes along one day. Having said that, keeping things running nicely on more modest hardware is good. I wonder if I could still get it running on a PiZero again??? A really stripped-down version should still work; I think the working memory used by the OpenCV image arrays may be the challenge.

Other features

I’ve also experimented with some other features and have now decided which ones to keep and which to drop.

Memory cards

During heavy testing over the winter I noticed that the RPi 4 at the heart of the telescope suddenly slowed down significantly. After investigation I found that the SD memory card was struggling: having saved/processed/deleted many thousands of images, the card was starting to fragment or corrupt in some way. The SD card is the only storage for everything on the telescope. A simple solution is a quick rebuild of pilomar on a new SD card, but I guess this problem will keep recurring in heavy use.

So the software now supports USB memory sticks as alternate storage. If one is found at startup, pilomar saves images to the memory stick instead; if there is no memory stick, it stores everything on the SD card as before. The advantage is that the USB memory can be much larger, so you can gather more images before offloading for stacking, AND you can now transfer images to other computers for processing by simply moving the USB stick across. The plug’n’play support feels a little shaky, especially if you are running headless, but it has worked reliably for me so far.
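The startup decision can be sketched like this; the paths and function name are illustrative examples, not pilomar’s actual defaults:

```python
import os

def choose_image_root(usb_mount="/media/usb", sd_fallback="/home/pi/pilomar/images"):
    """Pick the image storage root at startup: prefer a mounted USB stick,
    otherwise fall back to the SD card. A minimal sketch of the behaviour
    described above; both paths here are hypothetical examples."""
    if os.path.ismount(usb_mount):
        return os.path.join(usb_mount, "images")
    return sd_fallback
```

If nothing is mounted at the USB path, the function quietly falls back to the SD card location, which matches the ‘works as before’ behaviour described above.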

Live image stacking

I played with live stacking of the images too! I thought it would be too complex for my limited understanding, but decided to give it a go anyway and see what I learned. It has some potential, and I got closer than I expected to a working solution, but I’ve decided to drop the feature for now. Live stacking would be very interactive, but using a dedicated image stacker offline gives better results. So that experiment is going in the bin too.

Lens distortion

I studied lens distortion a bit as part of the live-stacking experiment. The 50mm lens has very little in practice, but there is noticeable distortion at the edges of 16mm images. Still, the distortion was generally not enough to justify extra complexity yet. Dealing with it may enable some other features further down the road, but nothing serious at this point, so that’s going in the bin too. It may return if I go back to live image stacking one day.

Observing conditions

As a result of the poor observing conditions recently I also added a weather monitoring feature. When you are sitting inside and the telescope is outside, it’s handy to know roughly how the conditions are developing. I use the API from metcheck.com. The telescope is not measuring local conditions directly, but using hourly forecast data from Metcheck. There’s almost too much data available there, but it’s useful for planning and monitoring observations, so it’s going into the official version.

Metcheck.com provides a JSON API for various sets of weather data, including an astronomy-biased dataset. I’ve found it really useful. Here’s an example for Aberdeen.
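As a rough illustration of what consuming such an API looks like, here is a sketch that filters an hourly forecast for usable observing hours. The JSON nesting and field names below are assumptions made up for the example; check Metcheck’s own API documentation for the real schema.

```python
import json

# Simplified, made-up sample of the kind of hourly records a forecast
# API might return. The structure and field names are illustrative only,
# NOT the real Metcheck schema.
SAMPLE = """{"metcheckData": {"forecastLocation": {"forecast": [
  {"utcTime": "2024-01-15T21:00:00", "totalcloud": "20", "seeingIndex": "7"},
  {"utcTime": "2024-01-15T22:00:00", "totalcloud": "85", "seeingIndex": "3"}
]}}}"""

def clear_hours(payload, max_cloud=50):
    """List the forecast hours with acceptable total cloud cover."""
    forecast = json.loads(payload)["metcheckData"]["forecastLocation"]["forecast"]
    return [hour["utcTime"] for hour in forecast
            if int(hour["totalcloud"]) <= max_cloud]
```

The same filter idea extends naturally to seeing, wind or moonlight fields once you know which keys the real feed provides.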

Quality control of images

I have wondered about detecting clouds in images so that they can be removed automatically, but haven’t solved that yet. However the software will now detect meteor/satellite/aircraft trails using some OpenCV line detection routines. That’s staying in the software. It’s useful to ignore unwanted images with trails, and also to spot meteors if you’re just capturing a meteor shower!
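Pi-lomar itself uses OpenCV’s line-detection routines for this; purely to show the underlying idea, here is a crude numpy-only sketch that flags a frame when its bright pixels lie almost exactly on one straight line. It is not the actual implementation, and the thresholds are arbitrary examples.

```python
import numpy as np

def has_trail(image, bright=200, min_pixels=30, max_spread=2.0):
    """Crude trail detector: if the bright pixels in a frame sit almost
    exactly on one straight line, flag the frame as containing a
    satellite/meteor/aircraft trail. Randomly scattered stars give a
    large scatter around any fitted line, so they are not flagged.
    (A purely vertical trail would need a second fit; omitted here.)"""
    ys, xs = np.nonzero(image >= bright)
    if len(xs) < min_pixels or xs.min() == xs.max():
        return False
    # Fit y = a*x + b through the bright pixels, then measure the scatter.
    A = np.column_stack([xs, np.ones_like(xs)])
    coeffs, residuals, *_ = np.linalg.lstsq(A, ys, rcond=None)
    rms = float(np.sqrt(residuals[0] / len(xs))) if residuals.size else 0.0
    return rms <= max_spread
```

OpenCV’s Hough transform is far more robust than this least-squares toy (it copes with multiple trails, gaps, and stars in the same frame), but the principle of ‘many bright pixels in a line’ is the same.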

User interface

Pilomar currently uses a simple character-based UI. The observation dashboard uses a small home-grown character library that I made for other projects. There are several other UI frameworks available, but so far I’ve considered integrating one to be quite low priority.

I recently experimented with adding a simple web interface. It would be great to operate the telescope directly from a mobile phone, for example. I got a simple live feed of the latest image onto my phone, but it’s quite a ‘can of worms’ to make it really slick! I think at the moment it’s still too big a distraction, and potentially better left for a full rewrite of the software sometime, so the software remains ‘character based’ only for now. A web interface may be useful if I ever convert the telescope to being fully mobile… maybe running it all off a 12V car battery so I can get to remote dark skies. The current UI has some basic flexibility for terminal window sizes anyway, so it could be run on smaller devices through a terminal emulator, I guess. Need to try it.

Finally

So that’s it: a busy software-development winter, with very few chances to make observations. But I’m on the final push now to get the build instructions published. Nearly there!

September 2022 Update

It’s almost a year since I posted an update, so I thought something was due. I spent winter 2021/2022 using the Pi-lomar telescope, then used the lessons learned to make further refinements.

The main changes have been with the software, a few months of real life usage always reveals bugs and improvement areas. Of course there are also new ideas to explore too.

The brain

  • Raspberry Pi V4 (still)
  • A new O/S version is available based upon Debian V11 – Bullseye
  • I tried to build a new version based upon this, but found several showstoppers at my level of experience. The camera system has changed, and several of the packages that Pi-lomar depends upon are not yet happy in the new O/S. So at the moment I’m restricted to the previous Debian V10 – Buster build. The new camera support in Bullseye is really interesting, but will have to wait until I can get the basic functionality working.
  • I’ve done some work to simplify the build script on the Buster build as that’s now obviously stable and the very latest release seemed to simplify the build requirements a lot.

The muscles

  • The motor control has proven quite a challenge to get to a level I’m happy with. But finally it seems I’ve moved forward here.
  • Originally I drove the motors (via drivers) directly from the RPi itself. But a Linux O/S is not ideal for realtime motor control, so a long project began to introduce a microcontroller to handle it. I started with the Raspberry Pi Pico using the RP2040 chip. Oh boy! Did I have problems here. No matter how simple the task or how I chose to power it, I had problems with random resets. I got through 4 of these boards before trying something else. I switched to the lovely Adafruit Feather RP2040 but had similar problems. I switched again to the great little Pimoroni Tiny2040 and the problems vanished, but I had fewer GPIO pins to work with, so some reworking of the CircuitPython code was needed.
  • The first ‘wiring’ of Pi-lomar was originally a real ad-hoc construction directly off the RPi GPIO header. It worked but looked crazy.
  • The second version, with the first flaky microcontroller attempts, was a little neater, using a prototyping board.
  • When I switched to the Pimoroni Tiny2040 I also played with designing a custom circuit board for it. A friend recommended trying EasyEDA as an online designer. After a few weeks learning it and playing with options, I had a design I figured was worth trying. I cannot overstate how impressed I was with the process of converting my amateur thoughts into an actual circuit board.
  • I downloaded the ‘Gerber files’ from EasyEDA, sent them to PCBWay, and less than a week later had 5 bare boards on my desk. Manufactured and shipped from China to the UK in under 7 days. Amazing.
  • I’m building a 2nd copy of the telescope now to use for the construction video and it will incorporate this new board hopefully.
  • Anyway, I now have a CircuitPython routine which runs nicely on a microcontroller and can handle motor movement independently of the RPi. The RPi can concentrate on the high-level activities: user interaction, planning and photographing things.
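To illustrate that division of labour, here is a stripped-down sketch of the kind of DRV8825 driver class the microcontroller might run. It is not the actual Pi-lomar CircuitPython routine; the class and parameter names are mine. The pin objects only need a .value attribute, so a CircuitPython DigitalInOut fits, and so does a plain stub for desktop testing.

```python
import time

class Drv8825:
    """Minimal illustrative DRV8825 driver (not Pi-lomar's actual code):
    the chip advances the motor one step per pulse on the STEP pin, and
    the level on the DIR pin selects the direction."""

    def __init__(self, step_pin, dir_pin, pulse_s=0.00002):
        self.step_pin = step_pin   # any object with a .value attribute
        self.dir_pin = dir_pin
        self.pulse_s = pulse_s     # pulse width; the DRV8825 needs ~2 microseconds minimum
        self.position = 0          # step counter maintained on the microcontroller

    def move(self, steps):
        """Issue |steps| pulses, setting DIR first; track position locally."""
        forward = steps >= 0
        self.dir_pin.value = forward
        for _ in range(abs(steps)):
            self.step_pin.value = True
            time.sleep(self.pulse_s)
            self.step_pin.value = False
            time.sleep(self.pulse_s)
            self.position += 1 if forward else -1
```

Keeping the step counter on the microcontroller is what frees the RPi from realtime duties: it can just send ‘move N steps’ requests and ask for the current position.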

The software

  • The software on the RPi is still Python 3 based.
  • I’ve produced two versions now: Pilomar-lite, which I intend to publish, and Pilomar2, which has more functionality but is also a playground for new experiments.
  • Both versions are ‘amateur’ builds, they could certainly be written more cleanly, but I’ve favoured functionality over beauty here.

Pilomar-Lite

This has all the functionality that the original version of Pi-Lomar had, with many bugs ironed out. It seems to work quite reliably, but hasn’t been tested by anyone else yet. That’s always a more thorough test!

Pilomar2

This has several extra experiments, most of which will probably die eventually, but some may prove useful. I’m currently trying to teach it to detect and compensate for lens distortion. There are some well-established ways to do this using OpenCV, but they all require extra steps and may prove complex in practice. So I’m playing with the idea that the telescope can gradually learn the distortion itself from actual observations and build its own compensation solution… The telescope works fine without this, but I want to be able to analyse a captured photograph and identify the actual objects in it more accurately in the future. If I get it to work, then other projects are possible. It may also help me get ‘image stacking’ working in realtime on the RPi directly one day. Image stacking is still done offline on a separate PC by downloading the individual images captured by Pi-lomar.

Build instructions

A couple of people have shown interest in getting build instructions for the telescope. I must admit I thought this would be simple to do, but I haven’t come up with a quick, clear and easy way to do the whole cake yet. I’m currently tackling it a couple of ways.

For the RPi build and software installation, I’m simplifying as much as possible what needs to be done.

The 3D printing files (.stl files) have been refined and are pretty much ready to download. The V2 build will also verify that everything still goes together properly, and of course it refreshes my memory of how I made the first one 🙂

For the physical telescope build I realised that writing a manual is quite tricky to do; I’m clearly not as good at this as IKEA! So I’m currently considering just making some simple videos to show the construction process. That’s faster to make, and faster to view… I hope!

A simulation of the motor driver board. Home to a Pimoroni Tiny2040 to handle the DRV8825 motor driver chips.
… and an actual one…

Pi-lomar

Pilomar overview

Preface

As a kid I was fascinated by the (then) large telescopes at observatories such as Mount Palomar, and now even they seem modest compared with the plans for the newer even larger telescopes appearing around the world. In 2012 when the Raspberry Pi was launched with its little V1 camera I wondered if it would be possible to do ANY useful astronomy with something so tiny. I followed some online instructions and built a little ‘Aurora Alarm’ for fun, then went no further until the Covid19 pandemic brought much of normal life to a standstill.

What to do with all this extra project time? I tinkered a little with stepper motors, working out how to drive them from Python on a Raspberry Pi Zero. The Raspberry Pi Foundation had recently launched the Hi Quality Camera and a more powerful lens to go with it. What to do with a nice compact camera and my little routine for driving some stepper motors?

The main driver was to have fun learning… I just wanted to tinker and see where I ended up. The objective was to see how simple and small an astronomical device I could make on a slim budget. In practice there have been 12 iterations of Pi-lomar so far. V1 was a block of wood with a couple of tiny motors in it. Then it was adjusted with some simple improvements, tested, and improved again… Now, a year later, I am praying for some clear skies because I seem to have something roughly working! Though let’s call it a ‘proof of concept’… Tinkering never finishes.

But that tinkering has taught me some basic skills with Fusion360, 3D printing, Python, gears, bearings, and dusted off decades old lessons in trigonometry, astronomy and simple electronics.
Making stuff is great! Though I’m still terrible at all of the above…

During the pandemic in the UK, the phrase “Do it badly” has been popular. It doesn’t matter that it’s not perfect, just do SOMETHING! So this is what I’ve done so far; badly…

@Short_Bus_

24.Feb.2021
(Revised Mar.2021)

What is Pi-lomar?

It is a Linux controlled camera pretending to be a telescope. The camera sits on a couple of stepper motors that can move it to point in any direction I choose. Then it just takes photographs!

Let’s pick that apart…

The Linux controller is a Raspberry Pi 4 with 2GB of memory. It’s running a home-grown Python 3 routine that coordinates the motor and camera activities.
The camera is the Raspberry Pi Hi Quality camera with the recommended 16mm telephoto lens attached.
The stepper motors are 12V NEMA 17 0.9-degree motors, so each can move to 400 different positions in a circle. Each motor is connected to a 1:60 ratio worm gear, giving 400 × 60 = 24,000 positions per axis, so the positioning can be given to 0.015 degree precision (in theory!). The motors are controlled from GPIO pins on the Raspberry Pi via a couple of DRV8825 stepper motor driver chips. The DRV8825 is a great way to convert the RPi GPIO signals into the 12V pulses that the motors need to move and hold their position.
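The step arithmetic works out like this (an illustrative sketch; the names are mine, not from the Pi-lomar source):

```python
MOTOR_STEPS_PER_REV = 400          # a 0.9-degree stepper: 360 / 0.9 = 400 full steps
WORM_GEAR_RATIO = 60               # 1:60 worm gear between motor and axis
AXIS_STEPS_PER_REV = MOTOR_STEPS_PER_REV * WORM_GEAR_RATIO  # 24000 positions
DEGREES_PER_STEP = 360 / AXIS_STEPS_PER_REV                 # 0.015 degrees

def angle_to_steps(angle_deg):
    """Nearest whole motor step for a target axis angle (illustrative only)."""
    return round((angle_deg % 360) / DEGREES_PER_STEP)
```

So pointing an axis at, say, 90 degrees means commanding step position 6000 of the 24,000 available.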

The whole thing sits in a technically unnecessary and probably too heavy 3D printed body that looks a little like the observatories that fascinated me as a kid, but I like it, so it’s staying 🙂 Can you guess which observatory inspired the look?

How is the body made?

The body is made on a 3D printer, using PLA plastic. (I actually use Technology Outlet PLA-Plus.) There are quite a few components that need to be printed and assembled. It takes quite a long time to print them all; this cannot be done in a single weekend unless you have a very impressive printer farm.

The mechanical parts (bearings, bolts, gears, shafts etc.) are all budget items sourced online. They are not free, but can be purchased a little at a time without breaking the bank. A slow 3D printer is your budgeting friend here 🙂

How does Pi-lomar operate?

The python routine is simple in concept, it only looks complicated and messy because I wrote it. I learned very quickly that the combination of Raspberry Pi, Linux and Python gives access to an enormous library of pre-existing software solutions. Generally when you are writing in Python you are just combining other people’s generous hard work in a new way.

At the high level, the program asks for a target (a planet, a star or a nebula), then it points the camera at the object and takes a photo. It keeps moving the camera to follow the target, and keeps taking photos until it drops below the horizon or you have enough photographs.

The night sky is quite dark, so you need to take a slow photograph to capture most things. Just a few seconds of exposure with default settings will show you something: maybe some of the brighter stars and planets, maybe even the faint smudge of a galaxy or nebula. But if you take LOTS of photographs the right way, you can use some other magical software that combines all those photographs into a single, higher quality and more detailed image. I have not worked out how to get the Raspberry Pi to do this ‘stacking’ itself yet, but Pi-lomar gathers and prepares the photographs so that they can be stacked on another computer: I pass them to a regular Windows PC running astrophotography stacking software. I use DeepSkyStacker; there are many alternatives out there.
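The reason stacking works is simple averaging: random sensor noise cancels out across many frames while the real signal stays put. Here is a toy demonstration of the idea on a one-dimensional "image" (this has nothing to do with DeepSkyStacker's actual algorithms, which also align and weight the frames):

```python
import random

random.seed(42)   # deterministic noise for the demonstration

# A toy "star" signal: one bright pixel in a row of dark sky.
true_row = [10, 10, 200, 10, 10]

# Simulate 100 noisy exposures of the same scene.
frames = [[p + random.gauss(0, 20) for p in true_row] for _ in range(100)]

# Naive stacking: average each pixel across all the frames.
stacked = [sum(col) / len(frames) for col in zip(*frames)]

# Each single frame is noisy; the stacked row is close to the true values.
print([round(p) for p in stacked])
```

The noise in the average shrinks with the square root of the number of frames, which is why a big pile of short exposures can rival one long exposure.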

How does Pi-lomar know where to look?

There is a wonderful astro library called skyfield_py, developed by Brandon Rhodes. It performs very useful calculations very efficiently and with an accuracy that I can only dream of. Want to know where the Moon is right now? Ask Skyfield. Want to know when Mars will rise? Ask Skyfield. With a little help it can even calculate satellites and comets. It also takes into account the distortion of the atmosphere and the fact that light from the object does not arrive instantaneously, so you look in the right place.

It is trivial to convert the ALTITUDE and AZIMUTH of an object from Skyfield into the position of the two motors.
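As a sketch of that conversion (hypothetical names, not the actual Pi-lomar code), it is just a matter of scaling degrees onto the 24,000-step axes:

```python
STEPS_PER_REV = 400 * 60   # 24000 positions per axis, as described earlier

def degrees_to_step(angle_deg):
    """Map an angle in degrees to the nearest motor step on a 24000-step axis."""
    return round((angle_deg % 360.0) * STEPS_PER_REV / 360.0) % STEPS_PER_REV

# e.g. Skyfield reports the target at altitude 45.0, azimuth 180.0:
alt_steps = degrees_to_step(45.0)    # 3000
az_steps = degrees_to_step(180.0)    # 12000
print(alt_steps, az_steps)
```

The `% 360.0` also tidies up negative azimuths, which some calculations produce.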

How does Pi-lomar take the photographs?

For speed and simplicity in development Pi-lomar just uses the ‘raspistill’ command to take a photograph. This is a useful utility, and has a lot of options that let you control how the photograph is captured. It is not the most efficient way to capture the photographs, but WORKING AND SLOW is better than BROKEN BUT FAST. I will eventually speed this up, I am sure.
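Because raspistill is driven entirely by command-line options, "taking a photograph" mostly means assembling the right command. A hedged sketch of doing that from Python (the option values here are illustrative, not Pi-lomar's actual settings):

```python
def build_raspistill_cmd(outfile, exposure_us, want_raw=True):
    """Assemble a raspistill command line: -ss is the shutter speed in
    microseconds, and -r appends the raw Bayer sensor data to the JPEG."""
    cmd = ["raspistill", "-o", outfile,
           "-ss", str(exposure_us),     # e.g. a 10 s exposure = 10_000_000 us
           "-t", "10",                  # minimal preview delay before capture
           "-ex", "off"]                # disable automatic exposure control
    if want_raw:
        cmd.append("-r")
    return cmd

cmd = build_raspistill_cmd("light_0001.jpg", 10_000_000)
print(" ".join(cmd))
# On a real RPi (legacy camera stack) you would then execute it, e.g. with
# subprocess.run(cmd, check=True) from the standard subprocess module.
```

Building the command as a list rather than one big string avoids any shell quoting surprises when filenames get more interesting.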

Normally we save pictures as JPG, PNG or TIFF format images, but these formats all involve a lot of processing of the raw image that the sensor actually captured. Astrophotography stacking software often works wonders with that original RAW sensor data: it knows how to handle the messy raw information from the camera sensor to pull out details that would otherwise be lost. So raspistill is used to capture this RAW sensor data. It’s a bit of a messy process; I would love to see this simplified! By passing the captured images through another routine called PyDNG, we can extract the original raw data from the sensor and save it ready for the stacking software.

The ultimate output of the camera is a folder full of .DNG (“digital negative”) files which look awful to us, but are full of information to the stacking software.

The stacking software also requires some ‘control’ photographs to be taken which help it identify noise and faults in the camera sensor. Pi-lomar will also let me take these, and store them along with the actual astro photographs.

Load all those photographs into DeepSkyStacker (or similar) and let it work its magic. After stacking I usually adjust some image settings for clarity, then save the result. I can tweak it further in GIMP or some other image package.

How does Pi-lomar know where it is pointing?

It doesn’t! I could use position sensors of some type to tell me where the camera is physically pointing. But this is a low budget project, AND I wanted to keep the complexity low. So Pi-lomar operates by ‘dead reckoning’: it moves the motors based upon where it THINKS it is pointing. It has logic to ‘remember’ what position it has requested, and this works quite well. But sometimes there may be timing or friction problems and a motor step may be missed. Remember the camera moves 0.015° per step, so a single mistake is not significant, but it has to make a LOT of steps in order to move any significant distance.
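The dead-reckoning bookkeeping can be sketched like this (an illustration with made-up names, not the real Pi-lomar class):

```python
STEPS_PER_REV = 24000  # the 400-step motor through the 1:60 worm gear

class DeadReckoningAxis:
    """Tracks where an axis SHOULD be pointing, based purely on requested steps."""

    def __init__(self):
        self.position = 0   # believed position, in steps from the home mark

    def step(self, direction):
        # In the real device this is where the DRV8825 step pulse would be sent.
        self.position = (self.position + direction) % STEPS_PER_REV

    def believed_degrees(self):
        return self.position * 360.0 / STEPS_PER_REV

axis = DeadReckoningAxis()
for _ in range(1000):            # request 1000 steps clockwise...
    axis.step(+1)
print(axis.believed_degrees())   # 15.0 degrees -- IF no steps were missed
```

The "IF" in that last comment is exactly the weakness described above: any missed pulse silently drifts the believed position away from reality.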

The two major causes of error I have seen are ‘timing’ issues and friction. Linux is NOT perfect for motor control: it can unexpectedly pause to do other tasks outside your control, which might cause poorly formed move signals. There is also some friction in the entire device; the observatory dome is about as heavy as the underlying bearing can handle, and sometimes it may not move as smoothly as hoped. Finally there is some ‘slack’ in the gearing: if the platform changes direction it might take a few extra steps to recover motion properly in the new direction. (I’m currently trying to improve the gearing and the control of the motors to make things smoother.)

Sounds awful, can it be solved? Skyfield to the rescue again. Skyfield tells you where the target is, but it can also tell you where the neighbouring stars are. The 16mm lens has quite a wide field of view (about 20 degrees), so even quite large position errors will probably still see SOME of the expected stars. With help from Skyfield (and OpenCV) I create a simulated image of the sky. The REAL and SIMULATED images can then be compared to check for errors. Another fantastic library called astroalign will even do the comparison for you! Given two images, astroalign tells you how they differ. I convert that difference into ‘tuning’ instructions for the motors. In effect, Pi-lomar can roughly auto-correct itself by running this astroalign check periodically to detect movement errors.
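Turning the measured drift back into motor corrections is mostly arithmetic: the field of view gives you degrees per pixel, and the gear train gives you steps per degree. A rough sketch (astroalign would supply the real pixel offset; the numbers and names here are illustrative):

```python
STEPS_PER_REV = 24000          # steps per 360 degrees on each axis
FIELD_OF_VIEW_DEG = 20.0       # approximate field of view of the 16mm lens
IMAGE_WIDTH_PX = 4056          # sensor width of the Hi Quality camera in pixels

def pixel_offset_to_steps(offset_px):
    """Convert a horizontal drift (in pixels) between the REAL and SIMULATED
    images into a motor correction (in steps)."""
    deg_per_px = FIELD_OF_VIEW_DEG / IMAGE_WIDTH_PX
    return round(offset_px * deg_per_px * STEPS_PER_REV / 360.0)

# A drift of 100 pixels is only about half a degree of sky:
print(pixel_offset_to_steps(100))
```

The coarseness works in our favour here: one pixel of drift is far smaller than one motor step, so the correction never needs to be pixel-perfect.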

Astroalign will probably be useful again if I manage to take the final image stacking on-board the RPi someday too. The tracking is not fast or 100% accurate, but again the wide field of view allows us some flexibility here, so it’s certainly good enough for this project.

What software components does Pi-lomar use?

This is what I include in my build script at the moment; hopefully some of it can be cleaned out eventually.

  • First, I thoroughly recommend that you install the full Raspbian operating system, including the desktop support and standard applications. Pi-Lomar is designed to run over an SSH connection with a terminal interface at present, but I found many dependencies needed adding back in if you try to run Pi-lomar on the more basic ‘headless’ installation.
  • Skyfield_py. (For astronomical calculations.)
  • OpenCV (for image generation and processing the captured images.)
  • PyDNG (for stripping out the RAW sensor data from the camera.)
  • Raspistill (should be already part of the basic operating system installation)
  • Python3 (components didn’t play nicely together in the Python2 environment.)
  • Numpy 1.16.5 or later (and potentially matplotlib behind the scenes)
  • Pandas (Data analysis library)
  • Astroalign (calculates alignment differences between images)
  • And a whole bunch of other dependencies gradually appeared during development.
    • Scikit-image, libwebp-dev, libtiff5, libopenjp2-7-dev, libjasper-dev, libqtgui4, libqt4-test, libhdf5-dev, imutils, libilmbase23, libopenexr-dev, libavcodec-dev, libavformat-dev, libswscale-dev, libv4l-dev, libatlas-base-dev

NOTE: Already by March 2021, the dependency list was different and more complex when I tried to build a 2nd RPi to develop the next version. As packages develop, there is a constant risk that version conflicts appear. Patience is required here!

  • Then on the PC. An astrophotography stacker (DeepSkyStacker or similar) and an image editing program (Gimp or similar). Also a good SFTP tool is handy, there are a LOT of images to transfer if you have a good observation night!

Current challenges

  • Friction was originally a worrying issue. The main bearing for Pi-lomar is a budget Lazy-Susan bearing from eBay! A high quality industrial bearing of the same size would be at least 10 times the price. I’m using computing power to overcome the limitations at the moment, but I have ideas for how to physically improve things in the next version without adding expensive components into the design.
  • Accessing the RAW data from the camera sensor is not pretty. I would love to see this improved. If I can find a simpler pipeline I think even some stacking and processing of the images could be handled onboard in realtime. Not sure how yet, but that’s the dream. I’ll probably convert from raspistill to picamera or libcamera eventually.
  • Very long exposures. The Hi Quality camera will support exposure times of 200 seconds. I’m currently only using much shorter times; 200 seconds requires good tracking precision and separating the motion and camera functionality into separate threads. At the time of writing, this is my focus. That might still force fundamental changes across the project…
  • Weight! The body is quite heavy, even for a 3D printed item; this increases friction, demands stronger motors and reduces speed! Early versions of Pi-lomar were small and light and could keep up with an ISS overhead pass; the current versions can’t. A weight loss program is required for future versions. However weight also gives stability, so it’s a fine balance!
  • Electronics. I need to refine the home-made ‘hat’ for the Raspberry Pi, which would simplify the wiring connections within the device. You can survive with breadboard wiring at first, but it’s not very robust in the long term. There are some issues to overcome with signal noise on some of the GPIO pins before they are initialised. These can disturb the motors, so currently there is a very specific startup sequence to follow! I want to resolve this so that it’s a simple ON/OFF. (Currently experimenting with the new Raspberry Pi Pico microcontroller here.)
  • I would love a deeper understanding of the skyfield_py library. I’m pretty sure that it can do some of the calculations I’ve struggled with, but I’m not confident enough with it.
  • The lens is not the same quality that you would get from traditional SLR camera lenses. The camera will take larger lenses, but they will add weight and may require a redesign of the gearbox too. However, it’s clear that the design could adapt to take other lenses of higher quality and power. The 16mm lens is also difficult to get perfectly crisp: in daylight you can get a fine focus, but for astro photographs it’s very fiddly to get it REALLY crisp. I am experimenting with a small 3D printed Bahtinov Mask to see if that will help. It’s possible that the motors generate some vibration too, but I’m not sure yet. And if I stick to the original concept of ‘cheap astronomy’, fitting larger lenses is not in the spirit of the project at the moment.
  • Axis alignment. For very long exposure photographs the structure needs to be converted to a ‘Polar aligned’ mount. This will reduce the rotation of images as the sky passes overhead and allow more precise alignment of them as they are combined. It will also simplify the motor control for tracking, but introduce a new challenge to keep the dome opening aligned. The principle is simple, but I have to redesign quite a bit of the structure and rethink the drive system. That may be 2 versions further ahead :)
  • Weatherproofing. The dome provides quite good protection from dew forming on cold nights, there is some heat given off by a RPi 4 and the stepper motors. It seems to keep the interior of Pi-Lomar above the dewpoint. But it is definitely NOT weatherproof. Ultimately I would like to design a properly weatherproof housing that will allow the telescope to be mounted outside permanently. Then I can combine it with the aurora alarm I made years ago and automatically photograph the aurora too if it ever appears here.
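One item in the list above is splitting the motor tracking and the camera exposures into separate threads, so that a long exposure does not freeze the tracking loop. A minimal sketch of that split, with trivial stand-ins for the real motor and camera work (greatly simplified; the real version would need proper locking around shared state):

```python
import threading
import time

stop = threading.Event()
steps_taken = 0
exposures = 0

def motor_loop():
    """Keep nudging the mount while the camera is busy exposing."""
    global steps_taken
    while not stop.is_set():
        steps_taken += 1          # stand-in for issuing one tracking step
        time.sleep(0.001)

def camera_loop(n_exposures):
    """Take exposures one after another; tracking continues in parallel."""
    global exposures
    for _ in range(n_exposures):
        time.sleep(0.05)          # stand-in for a long raspistill exposure
        exposures += 1
    stop.set()                    # tell the motor thread we are finished

motor = threading.Thread(target=motor_loop)
camera = threading.Thread(target=camera_loop, args=(3,))
motor.start(); camera.start()
camera.join(); motor.join()

print(exposures, steps_taken)    # tracking kept running throughout the exposures
```

The point of the sketch is the shape, not the timing: the camera thread can block for minutes while the motor thread carries on stepping.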

Next steps

Pi-lomar is ‘just’ freshly working. I’m only now satisfied that something useful is achievable. I still have quite a lot of fine-tuning and experimentation to do, and I need to find the limits of quality and precision. As a result the current software is full of ‘debugging’, ‘logging’ and ‘experiments’. I am still discovering new Python features and libraries to improve or simplify the solution. As every programmer says every day… “this needs a complete rewrite”… must… resist…

I suspect that there is at least one more design of the body to go through, which will have the polar aligned mount, reduced weight, reduced friction and better motor control, that may take some time to finalise and has to wait until ‘observing season’ is out of the way.

It would be very easy to increase complexity and cost in order to improve quality further. Costly motor controllers, expensive bearings, more processing power…. But they are not the target. The idea is ‘cheap astronomy’. I want to take interesting photographs, not design the perfect telescope in THIS project.

Where are the designs published?

I’ve not published them yet, I’m currently checking the limitations and reliability, and ensuring that it can photograph enough items to be useful. I’m hoping to refine a new Pi-Lomar body based upon those lessons, then I’ll publish a cleaned up example program and the STL files for the 3D printing. Then smarter people than me can improve it further! That means learning how to use something like GitHub too I suppose :)

I cannot wait. How do I start anyway?

I bet you could build something out of cardboard if you are in a hurry! You don’t need anything clever to get started. The key to everything is learning how the skyfield_py library works on a RPi. You can achieve a lot if you can just point a camera at something in the sky and take a photograph with raspistill! Everything else, including connecting to the stepper motors, is described online.

Lessons learned

  • If you find yourself writing complicated tasks in Python. STOP! Spend a couple of days searching and you will usually find a far superior solution already exists as a library that you can import.
  • Script everything. You WILL make mistakes, want to start again, and if you are really lucky you’ll probably even blow stuff up sometimes. A complicated project is impossible to remember in detail, so make sure that you can BUILD your solution from scratch in scripts. Maybe even build it into the program itself: every time you run it, the program first checks that everything’s OK. Folders exist, access rights work, modules are installed. (Python will do that last bit spectacularly anyway! :))
    Be prepared that even your carefully constructed build script will break anyway; dependencies change as packages update.
  • If you REALLY cannot find the fault in your software, check the hardware! I recently lost a few days trying to debug comms problems, only to ultimately find that there was a faulty board.
  • Don’t care about performance at first; care about functionality. If the functionality is right, performance can be chased afterwards. If your idea doesn’t work at all, it’s better to find out early. When combining lots of libraries, there are many combinations that ‘eventually’ don’t work, and you may get a long way into a solution before hitting a problem. It is horrible when it happens, but it DOES happen, so have the energy to try a new path instead! Your software will NEVER be perfect; aim instead for WORKING!
  • DO IT BADLY
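The "check everything when the program starts" idea from the lessons above can be sketched like this (the folder and module names are made up for illustration):

```python
import importlib
import os
import tempfile

def startup_checks(folders, modules):
    """Verify the environment before doing any real work: create any missing
    folders, and report which required modules cannot be imported."""
    for folder in folders:
        os.makedirs(folder, exist_ok=True)   # create if absent, keep if present
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Hypothetical example: one data folder plus a couple of dependencies.
demo_folder = os.path.join(tempfile.gettempdir(), "pilomar_demo")
missing = startup_checks([demo_folder], ["os", "json", "definitely_not_a_module"])
print(missing)    # anything listed here needs installing before a session
```

Reporting the missing modules rather than crashing on the first bad import gives you the whole shopping list in one run.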
VERY early prototype… learning how to control simple stepper motors with a Raspberry Pi Zero.
Bigger Raspberry Pi, stronger motors and camera tests
Developing the dome body and internal structure
Designing the components for 3D printing