February 2026 update

Been a while since I updated the blog so thought I’d post some updates on the project.

Peace and quiet, a TMC2209 story

The original project uses DRV8825 stepper motor drivers. These were the most commonly recommended drivers when I was first learning how to drive motors from a Raspberry Pi computer many years ago. They are simple, but they are noisy. If you are young you can hear them whine at higher frequencies! The signals they send to drive the motor coils are quite rough. So the project works, but I was uncomfortable with the noise especially when making large moves. You can reduce the loud mechanical noise by switching to microstepping, but the high pitched whine seemed to be a permanent feature.

A couple of people in the pilomar project group on Discord have recommended (and used) TMC2209 drivers instead. I am grateful for those alternative builds; they gave me some initial pointers on how to convert the project to use TMC2209s.

TMC2209 boards are a similar size to the DRV8825 and have almost the same layout, but they are more sophisticated devices. They have more options and, most importantly, much smoother control signals!

Original DRV8825 on left, BigTreeTech TMC2209 on the right.

One big advantage of the TMC2209 is that you can perform more functions AND more configuration tasks directly with the chip via a UART channel. The DRV8825 requires potentiometer adjustments and individual pin signals for everything; the TMC2209 does not, which may make it easier for people to get the project up and running initially.
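To give a feel for what travels over that UART channel, here’s a sketch (plain Python, not the project’s actual code) of how a TMC2209 register-write datagram is framed. The CRC follows the algorithm described in the TMC2209 datasheet; the register address and value below are just example numbers.

```python
def tmc_crc8(data: bytes) -> int:
    """CRC8 (polynomial 0x07, bits consumed LSB-first) per the TMC2209 datasheet."""
    crc = 0
    for byte in data:
        for _ in range(8):
            if ((crc >> 7) ^ (byte & 1)) & 1:
                crc = ((crc << 1) ^ 0x07) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
            byte >>= 1
    return crc

def tmc_write_datagram(node: int, register: int, value: int) -> bytes:
    """Build an 8-byte TMC2209 write datagram:
    sync byte, node address, register (write flag set), 32-bit value, CRC."""
    frame = bytes([0x05, node & 0xFF, (register | 0x80) & 0xFF,
                   (value >> 24) & 0xFF, (value >> 16) & 0xFF,
                   (value >> 8) & 0xFF, value & 0xFF])
    return frame + bytes([tmc_crc8(frame)])

# Example values only: register 0x10 with an arbitrary payload, driver node 0.
datagram = tmc_write_datagram(node=0, register=0x10, value=0x00071703)
print(datagram.hex())
```

On the microcontroller the resulting bytes would be pushed out with something like `busio.UART(...).write(datagram)` in CircuitPython.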

The challenge, however, was the UART communication! The original design uses Pimoroni Tiny RP2040/RP2350 microcontrollers and there are not enough pins available to add the extra serial communication lines needed. I needed more GPIO pins!

Tiny2350 vs Pico2: At last I have enough GPIO pins!

Very early in the project I had rejected the Raspberry Pi Pico RP2040 because it had proven too unreliable, with constant random resets that I could not eliminate. The Pico 2 (RP2350) is now available, and my testing shows its reliability is much better. So a large rewrite and restructure of the microcontroller code was started. That took all of my spare time throughout 2025 – which explains why there have been no blog posts in 2025 either! Thanks go to Thomas Proffen too for kick-starting the code restructuring for the microcontroller.

But after a lot of digging, learning, programming and testing, I think I have a TMC2209 solution ready to roll back into the project. So far the results have been astounding. When I first ran the new solution I thought it was not working at all: the move commands resulted in no response from the telescope. After some panic I realised that it was in fact working, but was so incredibly quiet and smooth that you couldn’t tell it was moving.

The TMC2209s have other tricks up their sleeve for future development. It looks like they can perform some actions even more autonomously than the DRV8825s, so there may be more features to introduce still. But I’m really happy with the conversion so far. The telescope can finally sit and work on the desk in front of you and you’ll barely notice!

I desperately need some clear, calm nights now, before the lighter nights arrive, to really test the new board on some real observations. I want to verify that all the engineering changes are stable as much as possible before releasing the new versions.

Some key points in the latest design

  • The motor power monitoring is back. You can now measure the motor power voltage again. This is useful when troubleshooting and also when running from batteries.
  • The board now hosts a Raspberry Pi Pico 2 (RP2350) with more stability and more capacity.
  • The DRV8825 sockets have been reworked to support the TMC2209 instead.
  • There are more GPIO pins available on the Pico 2, so more have been exposed around the board edge for future expansion.
  • 5V, 3V and 12V power points have been added to drive fans or other ‘light load’ accessories.
  • The Pico2 I2C channel is made available to support position sensors and other addons.
  • The ‘reset transistor’ circuit for the microcontroller is eliminated. It existed to work around the lack of a reset pin on the Tiny2040. The new design supports a USB cable during operation better than before, and the voltage supplied to the microcontroller is much higher now.

Another PCBWay order

By changing the microcontroller and stepper driver I had to revisit the PCB design again. I had already converted the project from EasyEDA to KiCAD in an earlier iteration, so it was time to re-open KiCAD and start adjusting the board design.

I had a new batch of the boards produced by PCBWay.com in China just before Christmas. The boards arrived as always remarkably quickly. I had them on my workbench less than a week after submitting the order. The speed of delivery alone is a great time saver in the project. I couldn’t make such a good quality board at home, and certainly couldn’t make a bunch of them at home as quickly as they can. These manufactured boards are also cheaper than the protoboards I was buying before! They look great, almost as if I know what I’m doing (I confess I’m still nervous!). I ordered the new ‘matt black’ finish on the PCBs and they look fantastic. I’ve used a different PCB color for each version of the design, but I think I’ll stick to the matt black from now on.

The new Matt Black PCB finish looks really neat.

I always assumed I would be building the project with protoboard circuits, but I love the compactness and reliability of a professionally manufactured board, so I’m not going back now. My current development route is:

  • 1 – Breadboard : To learn what connections and components are needed.
  • 2 – Protoboard : To test reliability and operation.
  • 3 – Manufactured PCB : For reliable and compact deployment.
Which do you trust most? My protoboard mess or a manufactured PCB?

Having the boards made by PCBWay’s industrial process also increases the reliability of the circuits; I have less to worry about when assembling the final components. I typically order a batch of 5 or 10 boards so I have enough for my working builds plus a few spares for further experiments or repairs.

I usually only publish the gerber files in the GitHub project, but I see I can also create a kind of template order on the PCBWay website, fully configured and ready for anyone to place their own order too. That looks like a handy feature for people who are not confident in building or ordering manufactured PCBs.

I’m currently ordering unpopulated PCBs for development flexibility but I’m wondering about trying their more complete assembly service eventually too.

Testing went well, although I found one minor routing mistake in my KiCAD design. Luckily I was able to solve it with a couple of jumper wires; I didn’t even have to hack the board.

Application development

I’ve continued restructuring the code, but of course like all projects it’s also becoming larger as more features are added and more edge cases are handled by the code. So splitting into cleanly defined modules is sensible. Isolating the logic from the user interface more cleanly will also open the door to alternative UIs in the future.

Installation is still the same procedure as before, but there are now more files to transfer when you set up the project. The microcontroller code is very specific to the Pico2/TMC2209/PCB combination, so I have split it into a separate circuitpython subdirectory in the GitHub repository.

When using the TMC2209 drivers the software can now handle more of the driver configuration programmatically which makes it easier to get up and running. You don’t need to adjust potentiometers, measure voltages or calculate anything manually to get the motor running properly. Part of me wishes I knew about the TMC2209 at the start of the project, but the other part of me is relieved that I started with the simpler DRV8825s!

AI assistance

I started experimenting with generative AI to help with some programming tasks. Initially out of curiosity, but I soon noticed that it was very useful in a couple of areas.
1) Generating boilerplate code, simple routines that took time but were easy to define and verify.
2) Generating efficient code using algorithms, tools or techniques that I was not confident with.

The first case is relatively simple: as long as you know what code you are expecting and are confident you can verify that it is correct and safe to use, it can generate routines for you rapidly. I always have to adjust the code a little to fit the project better, but it has been a timesaver a few times.

The second case is more interesting. There are two tasks where it helped me significantly.

I wanted to do some complex sky projections of data (see the aurora utility below). I managed to do the projection using my very basic trigonometry skills, but the result was SLOW, and I knew for certain that there would be much faster ways to do the job. But I don’t have experience with more advanced transformations and transpositions of arrays of coordinates.
So I defined what data I had, what I wanted to achieve and which tools I wanted to use (e.g. numpy), and started an iterative development cycle with Microsoft’s Copilot. It took some time and I had to restart the discussion three times, but I eventually learned how to steer it and get some useful code from it. The risk here is that I still don’t fully understand some of the calculations it performs – but I can verify the input and output data against my original code. You can of course continue the discussion with the AI to get more understanding of the code it wrote; it can make a patient tutor at times. The new code produced a significant performance improvement: my projection calculation went from about 90 seconds to about 1 second.
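The kind of speed-up involved is easiest to show with a toy example (this is not the actual pilomar projection code): converting a per-point Python loop into a single vectorized numpy pass over whole coordinate arrays. Here a list of hypothetical alt/az angles is converted to unit x,y,z vectors both ways.

```python
import math
import numpy as np

def to_xyz_loop(alt_deg, az_deg):
    """Per-point Python loop: simple to write, but slow for large arrays."""
    out = []
    for alt, az in zip(alt_deg, az_deg):
        a, z = math.radians(alt), math.radians(az)
        out.append((math.cos(a) * math.sin(z),
                    math.cos(a) * math.cos(z),
                    math.sin(a)))
    return np.array(out)

def to_xyz_numpy(alt_deg, az_deg):
    """Same conversion done in one vectorized pass over the whole arrays."""
    a = np.radians(np.asarray(alt_deg, dtype=float))
    z = np.radians(np.asarray(az_deg, dtype=float))
    return np.column_stack((np.cos(a) * np.sin(z),
                            np.cos(a) * np.cos(z),
                            np.sin(a)))

# 100,000 random sky positions: the loop takes seconds, numpy milliseconds.
alt = np.random.uniform(0, 90, 100_000)
az = np.random.uniform(0, 360, 100_000)
xyz = to_xyz_numpy(alt, az)
```

The important part is that the trigonometry moves from interpreted Python into numpy’s compiled array operations, which is where the roughly 90x gain comes from.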

Another task was to do with image cleaning. In the pi-lomar project I have some OpenCV filters, routines that I have written which perform various cleaning or enhancement tasks on images. They are mainly used by the target tracking to clean up an image of the sky, enhance the stars and eliminate any pollution or haze. Up until now I have been doing this by researching online and trial-and-error programming to get the results that I want.

I found that by defining the problem well with Copilot I could get it to write code to do the same cleaning efficiently. What was really useful is that Copilot can analyse your example images. I described the camera sensor and lens I was using, including that the IR cutoff filter was removed. Copilot then made some reasonable analysis of the images and what issues they could have, and produced some efficient OpenCV/Numpy routines to clean them. This is still an iterative loop, but it opened a whole new way to solve future image handling problems. Once more I feel confident to try this approach because I can test input/output to see if I get the results I expect.
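To give a flavour of what such a routine looks like, here is a minimal numpy-only sketch (my own simplification, not the actual Copilot output or pilomar code): estimate the smooth background level, subtract it, and keep only pixels that stand well above the noise.

```python
import numpy as np

def enhance_stars(image, sigma=3.0):
    """Crude star enhancement on a 2D grayscale array:
    subtract the background (haze / sky glow) estimate and
    suppress everything below a noise-based threshold."""
    img = image.astype(float)
    background = np.median(img)            # flat sky glow estimate
    noise = np.std(img)                    # very rough noise estimate
    cleaned = img - background
    cleaned[cleaned < sigma * noise] = 0   # keep only strong point sources
    return cleaned

# Synthetic test frame: flat haze, a little noise, and one bright 'star'.
frame = np.full((64, 64), 40.0)
frame += np.random.normal(0, 2, frame.shape)
frame[32, 32] = 250.0
stars = enhance_stars(frame)
```

A real pipeline would use local background estimation (e.g. a large median filter) rather than one global median, but the verify-by-input/output idea is the same.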

I have witnessed some truly weird bits of code being generated, so it’s critical that you understand what you are asking for, take time to explain it clearly to the AI, and study the suggested solution in depth. Even when the result is wrong, you sometimes still learn new things!

Utilities added

METCHECK.COM weather forecast

When planning observations it’s useful to know if the conditions will be right.
Here on the UK coast of the North Sea we don’t get many perfect nights, so you don’t want to miss them when they come. There are lots of websites and phone apps for the weather, and I use a few of those to plan the week ahead. When the actual night of observing comes I switch to a live data feed from https://www.metcheck.com/BUSINESS/developers.asp

For my location I have found this weather data feed really useful and generally accurate. METCHECK.COM make a weather forecast available for any latitude/longitude on Earth, and you can download it as a json file from their website. The file is large and difficult to read in raw json format, so I made a terminal interface to show the information in a table. I have now included this utility program in the GitHub repository for the project.

METCHECK.COM data viewer, demonstrating that it’s pretty cloudy here.

You can open a terminal window, go to the src directory and enter

python pilomarmetcheck.py

This command will display a table of colorcoded weather forecast information.
Each row is a different weather measurement. Each column is a time into the future. The wider your terminal window, the more columns can be shown.
The utility refreshes automatically every few minutes to keep the information up-to-date. The same source code src/pilomarmetcheck.py provides the class metcheck_handler() which you can use in other programs if needed.

from pilomarmetcheck import metcheck_handler

Check in src/pilomarmetcheck.py to see how it is used to extract and display the data. The metcheck_handler class is also included in the pending version of src/pilomar.py so that weather conditions can be monitored and also recorded in image metadata.
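The idea behind the table view is simple enough to sketch. The field names below are hypothetical placeholders (the real metcheck json has many more fields); the point is pivoting a list of per-hour forecast records so each measurement becomes a row and each hour a column.

```python
def forecast_table(records, fields):
    """Pivot a list of per-hour forecast dicts into one text row per
    measurement. 'records' and 'fields' stand in for the real metcheck data."""
    lines = []
    for field in fields:
        cells = [str(r.get(field, "?")).rjust(6) for r in records]
        lines.append(f"{field:<12}" + "".join(cells))
    return "\n".join(lines)

# Hypothetical sample of what a few hourly records might look like.
sample = [
    {"hour": "18", "cloud": "90", "temp": "4"},
    {"hour": "19", "cloud": "75", "temp": "3"},
    {"hour": "20", "cloud": "40", "temp": "2"},
]
print(forecast_table(sample, ["hour", "cloud", "temp"]))
```

The real utility adds colour coding per cell and trims the number of columns to the terminal width.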

NOAA Aurora conditions

I hate to miss an aurora display. We’ve had some very spectacular displays during the current solar maximum, but our notoriously cloudy nights mean it’s difficult to actually catch them. I noticed that all the online aurora visibility maps show the ‘ovation’ data from NOAA.GOV.
This is a map of the likely aurora intensity in the next 30 minutes for each lat/lon on Earth. If I understand correctly, this is the ‘probability’ of seeing the aurora directly overhead. But the aurora is usually around 100km up in the atmosphere, so even if it is not directly overhead you may still be able to see it when it is overhead a nearby location.

Can you see the aurora if its lat/lon is over the horizon? It depends how high it is.

So I wrote a ‘projection’ routine which estimates what the aurora may look like when viewing it from a distance. It is a simple terminal interface which refreshes automatically every 15 minutes. It shows the aurora oval as you might see it yourself. You can see the size of the oval and whether it is above or below the horizon from your location.

Simple Aurora ovation viewer. Could the aurora be visible where you are?
In this case, no, I probably cannot see it 🙂

The online alerts are still the best way to START looking for the aurora, but once an alert is ongoing I use this tool to monitor the movement and latitude of the aurora during the event. It relies entirely upon the probability calculated by NOAA, and there are many other issues that can affect your view of the aurora, but it was an interesting challenge to program given my poor mathematical skills!

To run it go to the /src directory and enter

python pilomarovation.py

The same source code src/pilomarovation.py provides the class pilomarovation() which you can use in other programs if needed.

from pilomarovation import pilomarovation

Check in src/pilomarovation.py to see how it is used to extract and display the data.

Checking images on a terminal

Here’s a crazy one. I’ve developed the project as a character-based application, but sometimes it’s really useful to be able to see images as they are being captured. Normally I use SFTP to transfer the .jpg files off the RPi and onto a PC for viewing. This is great, but sometimes I just need a very quick view of an image. So I added a utility to display .jpg files through the terminal interface. It uses the XTERM 256 color palette to display a downscaled copy of the image.
I’ve found this strangely useful! So I’ve included it in the project repository now too.

A .jpg image of a tree rendered on a character terminal.

To run it go to the /src directory and enter

python pilomarviewer.py '[filepath]'

where ‘[filepath]’ is a .jpg filename or even a wildcard filepath. The ‘quote marks’ are important if you use wildcards.

It displays the image in the 40x160 terminal window. You can pan/zoom around the image as you need. If the image gets updated on disc the display automatically refreshes. If you enter a wildcard in the filepath, such as ‘/data/light_*.jpg’, it displays the most recent matching image.
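The colour trick is the interesting part. Here is a sketch of how a pixel’s RGB value can be mapped into the xterm 6x6x6 colour cube (indices 16–231); this is the standard xterm palette layout, not pilomar’s exact code, and it ignores the greyscale ramp for simplicity.

```python
def rgb_to_xterm(r, g, b):
    """Map 8-bit RGB to the nearest entry of the xterm 6x6x6 colour cube."""
    def level(v):
        return round(v / 255 * 5)   # quantise 0-255 down to 0-5
    return 16 + 36 * level(r) + 6 * level(g) + level(b)

def pixel(r, g, b):
    """ANSI escape sequence that prints one coloured block character."""
    return f"\x1b[48;5;{rgb_to_xterm(r, g, b)}m \x1b[0m"

# A tiny two-pixel 'image': red then blue.
print(pixel(255, 0, 0) + pixel(0, 0, 255))
```

Downscale the .jpg to the terminal size, emit one such escape sequence per pixel, and you have an image viewer with nothing but print statements.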

You will be able to launch this from the next pilomar.py program too and link it to the current observation automatically.

Observations

2025 was a terrible year for observations here: the summer was fantastic, but this far north the nights were too light, and the winters are too cloudy. We have had maybe 3 good viewing nights so far this winter, and 2 of them had a bright moon to complicate things. But I managed one night recently where I could test everything nicely for a few hours, and the whole package still works. In fact, when it runs this smoothly you have to find a good book to read.

We’ve missed a few spectacular aurora displays due to weather, but sometimes caught a glimpse when the cloud has broken up. I still want to play with the ‘keogram’ function in the pilomar software on a good aurora display.

Next steps

When the new release is ‘stable’ I will return to the plate solving problem. I’m determined to get a solution up and running for this, and I think that the AI tools will probably speed up the development for me now.

I’m in the process of updating the GitHub repository for the project with the last 10 months of developments and changes, and I hope to have all that published soon, including the latest TMC2209 PCB design.

Spring is approaching so already my mind is turning back to more summer development tasks. I hope that some of the other builds around the world are in better climates and getting some good images back!

July 2024 update

Instructables builds

There are now 9 builds listed on the Instructables website, and I’ve also had contact from other builders, so I guess there are around 20 telescope builds out there. There’s a small – but amazing – community forming for the telescope now. To create a sort of support group for the project I’ve set up a Discord group for people who are building or running a copy of the telescope.

PM me via the Instructables project page for a link if you would like to join.

GitHub for pi-lomar updated

A large number of changes to the package are in the latest release just published on GitHub. There is a list of the changes, and some hints about upgrade options here.

The most significant changes in the new release are

Now runs on Raspberry Pi 5.

The GPIO handling is different on the RPi5, so I had to redevelop and retest the GPIO code to work there. This reinforces the advantages of switching to the Bookworm 64bit O/S. The RPi4 and RPi5 both run pilomar happily on Bookworm now. Support for the RPi3B with the old ‘Buster’ build remains, but it cannot support some of the new features in this latest release. If you want to upgrade to the latest version I now strongly recommend an RPi4B or RPi5 as the main computer.

FITS image handling.

With support from a few people in this project, and also from Arnaud and the Astrowl box project, there’s now a way to save .fits format image files. FITS file formats are required by some astro image processing software. The raspistill and libcamera-still utilities will save raw images in .DNG format, but that is not accepted by some software. This new FITS format only works on Bookworm builds because it requires the picamera2 package to be available. You may be able to get this installed on earlier O/S versions, but I think it will need some tinkering. The FITS handling has been done by creating a new standalone Python routine (src/pilomarfits.py) which can be called just like the ‘libcamera-still’ command, but which generates .JPG and .FITS files instead. This is likely to improve further in the future; it’s just an initial attempt to add FITS format.
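For anyone curious what ‘saving FITS’ actually involves at the byte level: the format is just 80-character header cards packed into 2880-byte blocks, followed by big-endian pixel data padded to the same block size. A stripped-down sketch (pilomarfits.py itself uses picamera2, and real code should let a library such as astropy build the headers):

```python
import numpy as np

def minimal_fits(image):
    """Build a bare-bones single-HDU FITS byte string for a 2D 16-bit image.
    Note: BITPIX 16 is signed; real unsigned camera data also needs BZERO."""
    h, w = image.shape
    cards = [
        f"{'SIMPLE':<8}= {'T':>20}",
        f"{'BITPIX':<8}= {16:>20}",
        f"{'NAXIS':<8}= {2:>20}",
        f"{'NAXIS1':<8}= {w:>20}",
        f"{'NAXIS2':<8}= {h:>20}",
        "END",
    ]
    header = "".join(c.ljust(80) for c in cards)   # 80-char cards
    header += " " * (-len(header) % 2880)          # pad header to 2880 bytes
    data = image.astype(">i2").tobytes()           # FITS data is big-endian
    data += b"\x00" * (-len(data) % 2880)          # pad data block
    return header.encode("ascii") + data

fits_bytes = minimal_fits(np.zeros((120, 160), dtype=np.uint16))
```

Writing `fits_bytes` straight to a `.fits` file gives something most astro software will open, which shows why the format is so attractive as an interchange target.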

Smoother motor movement.

A whole bunch of ideas came from other builders, and it became clear that microstepping is easy and safe to activate for most people. Microstepping makes movement slower, but smoother and quieter. It required some rethinking of the code on the microcontroller because microstepping generates a lot more motor pulses; a limitation with the clock in CircuitPython became apparent but is now resolved. There is also a ‘slew’ mode available in the latest package. This lets the telescope perform LARGE moves using full steps on the motor – noisy but fast. Then when it starts capturing the observation images it switches to microstepping.
Better microstepping support also means that you can now build using 200-step stepper motors. These are generally easier and cheaper to buy.

Easier configuration changes.

Pi-lomar’s configuration is generally held in the parameter file. You can make a lot of changes to the behaviour there. This latest release has moved even more of the configuration into this one file. However some changes can be quite complex to configure correctly. Therefore the latest software has added a few options to perform some common configuration changes directly from the menus. This ensures more consistent and reliable setup for a few camera options and also for configuring several of the microstepping related issues.

Aurora features.

After the spectacular Aurora displays earlier in the spring, I’ve added some experimental Aurora recording features to the software too. Obviously we now have to wait for some good Aurora displays to fully test this feature, but the basic concept seems to work OK. In Aurora mode the camera points to a likely direction for the Aurora and captures images as quickly as possible. It can also generate a simple KEOGRAPH of the aurora display which may be interesting to study sometimes.

Observations

Well, summer is here now, and the skies are too light, too short and sadly still too cloudy. So no practical observations of anything new to show this time. All the project work has gone into this latest software development round instead.
So I’m now looking forward to slightly longer and darker nights coming in August and September, and hoping that the clouds go away.

What’s next?

I’m currently exploring some modifications to the telescope design.

Now that the RPi5 is supported – it has TWO camera ports! So I would like to explore the idea of having two cameras mounted in the telescope: ideally a 16mm lens dedicated to tracking, and a 50mm higher quality lens dedicated to observation pictures. There is also some feedback from other builders which is re-opening the design of the camera tower and camera cradle. I’m currently thinking of making a slightly wider camera tower to accommodate 2 cameras, and probably reorienting the sensors into portrait mode to improve access for focusing. It may make sense to improve the weatherproofing around the camera boards – as others have already done.

After a chat in the Discord group I’m also looking at adding a permanent ‘lens cap’ to the camera tower. This would sit below the horizontal position of the camera, so that the lens can be parked up against it when not in use. There are a couple of advantages to this idea. (1) You don’t have to remember to remove or reinstall the lens cap. (2) If the cap is sufficiently dark, the camera can take the ‘dark’ control images automatically at the end of each observation.

I have a redesign of the motorcontroller PCB nearly ready, with improved power performance for the microcontroller. There will probably be another couple of improvements made to it, and then I’ll try getting some samples printed up. I considered switching from the Tiny2040 microcontroller to something larger with more GPIO pins, but have decided to stick with the current setup. There seems to be a practical memory limit on the RP2040 chip in the microcontroller, it has around 200K of working memory available to it, and the current functionality consumes it all. I cannot even get the current code to run on CircuitPython 9.x yet, so it’s still limited to 7.2 and 8.2. It may be worth waiting to see if any 2nd generation microcontroller comes from RPi in the near future before finalising the design.

February 2024 update

Instructables builds

There are 5 ‘I made this’ posts on Instructables now for pi-lomar, and a few other builders have been in contact with questions and suggestions over the last two months. I’m really looking forward to seeing what people manage to do with the telescope and get some feedback and improvement ideas.

GitHub for pi-lomar updated

The January issues branch in GitHub became quite a monster; there is a list of all the changes available here. I’ll cover a few of the interesting items below.

UART communication problems

A few people have had problems getting the communication to work between the RPi and the Tiny2040. This has been due to a few different issues, but it became clear that more help was needed to identify where the problems are when communication doesn’t work. If something is wrong in the communication chain you might get an error message, or you might simply get a ‘dead telescope’ which refuses to move. So I’ve added features to detect and complain more clearly if something is wrong, and also a way to monitor and test the communication in realtime.

Software versions

The original build required very specific versions of CircuitPython and the Raspberry Pi O/S. I’ve addressed a few of the limitations now so you can use the most recent copies of both. I’ve now got a telescope running happily with Bookworm 64bit on an RPi4 and CircuitPython 8.2 on the microcontroller. This means you can use whatever the current version is – you don’t have to go looking for archived copies anymore. The released version does not work on the RPi5 yet – I’m going to rework the camera handling for that beast first.

Motor configurations

The original design was heavily dependent upon specific stepper motor designs. This was quite restricting for some because they are not always easy or cheap to source. The new software has moved the motor and gearing configuration into the parameter file instead of being hardcoded. So now it is simpler to set up alternative motors AND you can still take updates to the software without having to repeat your own hardcoding changes.

Removing the infrared filter

In the last blog I mentioned that I had removed the infrared cutoff filter from one of the camera sensors. I had to wait a while for a clear enough night, but eventually grabbed a few shots of the region around the Orion Nebula. It was not a great observing night, there was considerable haze and some random cloud, but I got a few images.

I am happy to confirm that it really made a difference. New objects appeared, and previously faint objects are clearly enhanced by the expanded vision of the sensor.

After stacking and some enhancement in The GIMP, this is the region with the new infrared capability. I was not able to remove ALL the haze, but if you are patient you can reduce it considerably.

For clarity I’ve marked the major items that are now visible below.

There is clearly a colour tint still to these images which I need to play with some more, but there are definitely new details here.

Orion Nebula

There seems to be a larger area of nebula visible in this image. The colour variation is not as good as earlier images but I think that’s something I can still work on.

Flame Nebula

This was faintly visible before the infrared cutoff filter was removed, but it seems to be more clear now. Hopefully when I can gather more images to stack I can pull more clarity from that still.

Horsehead Nebula

With the infrared filter in place you could see a very faint hint of the Horsehead Nebula surroundings, but they were very subtle. You had to know there was something there and then play with image enhancement to get even a slight hint of it. But it is now more clear.

Barnard’s Loop

Orion is surrounded by a large ‘infrared only’ area of gas. I’ve never seen this before in any observations I’ve made, but suddenly it’s there. Barnard’s Loop is to the left of the belt, and although faint, there’s no doubt it’s now detected. The gas cloud extends lower down around Orion too, but in this shot it’s hard to separate urban haze from actual gas cloud.

Urban Haze

This brings me to my current problem: urban haze. There is light pollution and generally poor quality atmosphere around here. I’m not living in a big city, but the conditions are visibly deteriorating as time goes by.

The IR image above was taken with the 16mm lens. I have now removed the IR cutoff filter from the 2nd telescope with the 50mm lens too, and that one also has a light pollution filter added. The brief chance I’ve had to test it suggests that it does indeed make a difference to the haze that’s creeping into all the shots. The question is: does it also reduce the infrared wavelengths? The next clear moonless night may answer that.

There’s another place where haze becomes an issue. That’s in the drift tracking mechanism of the telescope. Pi-lomar checks its position by taking a live image and comparing it with a calculated image of the sky. It uses the difference between star locations to correct the position of the camera. It’s not perfect, but it works well enough for the images to be stackable. But if there is strong haze in the sky the Astroalign package can struggle to recognise and match stars between the images. You can get a cloud of false stars at the bottom of an image which confuses things.
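Once star positions have been matched between the live and calculated images (astroalign does the hard part), the drift correction itself reduces to averaging the matched displacements. A toy sketch of that final step (hypothetical coordinates; the real code works on pixel positions reported by astroalign):

```python
def drift_offset(expected, observed):
    """Mean (dx, dy) displacement between matched star positions.
    'expected' and 'observed' are equal-length lists of (x, y) pixels."""
    n = len(expected)
    dx = sum(o[0] - e[0] for e, o in zip(expected, observed)) / n
    dy = sum(o[1] - e[1] for e, o in zip(expected, observed)) / n
    return dx, dy

calculated = [(100, 100), (250, 80), (400, 300)]
live = [(103, 98), (253, 78), (403, 298)]
print(drift_offset(calculated, live))  # -> (3.0, -2.0)
```

You can see why false stars from haze are so damaging: a cloud of bogus matches at the bottom of the frame drags this average away from the true drift.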

To work around that I use OpenCV to try to enhance the stars in the live tracking image. Basically trying to reduce noise and enhance just the real stars. This requires tuning some OpenCV filter functions to work nicely with MY particular observing conditions. That’s a problem for people in other locations, they may need to tune the filter functions differently.

So I’ve modified the software to make these OpenCV filter functions into ‘scripts’. You no longer have to play with hardcoded function calls in the software; you can simply edit the scripts and test them rapidly against your conditions. I hope this is a real benefit for people. I will probably refine the configuration and testing further in future versions. This is clearly an area where a graphical interface would help. An early test of this new feature looks promising when trying to filter out tree branches from someone’s live tracking images. It looks like we can still pull stars out of quite busy and noisy shots.
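The ‘script’ idea can be sketched as a tiny dispatcher: each script entry names an operation and its parameter, and the operations are applied in order. The operation names here are hypothetical stand-ins; pilomar’s real scripts drive OpenCV functions.

```python
import numpy as np

def threshold(img, level):
    """Zero out everything below 'level'."""
    out = img.copy()
    out[out < level] = 0
    return out

def scale(img, factor):
    """Brighten by 'factor', clipped to the 8-bit range."""
    return np.clip(img * factor, 0, 255)

OPERATIONS = {"threshold": threshold, "scale": scale}

def run_filter_script(image, script):
    """Apply a list of (operation_name, value) steps to an image in order."""
    for name, value in script:
        image = OPERATIONS[name](image, value)
    return image

img = np.array([[10.0, 200.0], [60.0, 255.0]])
cleaned = run_filter_script(img, [("threshold", 50), ("scale", 1.2)])
```

Because the script is just data, different builders can tune the chain for their own skies without touching the software itself.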

Next steps

I am not intending to develop the software further now until the summer. The latest update needs to be taken into use and tested in more environments, so I want to limit any new changes to bug fixes or tuning related to that. Spring is approaching, it’s better to spend time observing!

January 2024 update

Testing Pi-lomar on the Raspberry Pi 5

Will Pi-lomar run on a Raspberry Pi 5?

Spoiler alert! No.

Not yet.

The camera and GPIO libraries have changed, but how close is it to working?

Interestingly, more of the required packages that made the RPi4 Buster build tricky now seem to be pre-installed in Bookworm. I only added opencv, astroalign, pandas and skyfield, and they all installed cleanly with no conflicts or special tricks needed.

sudo apt install python3-skyfield
sudo apt install python3-opencv
sudo apt install python3-astroalign
sudo apt install python3-pandas

The resulting build script will be much simpler I hope. I’m still installing globally rather than creating containers because the RPi will be dedicated to the telescope.

The pilomar.py program of course errored out fairly quickly, but with relatively little change I got it up and running as far as the first menu. That includes all the data loading and formatting that has to happen when you first run the software.

Right out of the box I have to say “wow!”, I’m impressed.

For comparison: the 2GB RPi 4B with the 32bit operating system takes about an hour to calculate the Hipparcos catalogue of 100,000+ stars. On an 8GB RPi 5 with the 64bit operating system it ran in 25 seconds, so fast that I thought it had failed; I had to speed up the progress messages to prove it was doing something. From nearly 60 minutes down to 25 seconds! In regular use I’d estimate Pi-lomar runs about twice as fast on the RPi5.

It looks like the basic migration should be straightforward, and there is capacity there for extra features.

Raspberry Pi 5 Active Cooling hint!

The official cooling unit is great – it's very easy to attach to the RPi5. BUT – you can't detach it. So if you're thinking of later putting it into a case, or occasionally reorganizing things, be very careful.

For example: I like the Pibow cases, but a couple of design choices clash.

  • If you attach the cooler first: you cannot fit all the layers of the case.
  • If you assemble the case first: you cannot remove all the layers later, and the camera connectors become more difficult to access.

Next time I'll swap the little spring-loaded feet for nylon bolts so the cooler can be removed; the permanent fixing is the fundamental design flaw to me.


Back to the RPi 4B version

Motorcontroller PCB

Unpopulated PCB as it arrives from the manufacturer: no components, you have to add those yourself.

The first PCB design is done and the Gerber files are now in the GitHub project for the PCB. These files can be used to manufacture the PCB. It still needs to have components added, but the wiring is all set in the PCB itself. Many thanks to Dale, Mark, and Ton for their help with the designs so far.

The published PCB has a few improvements on it.

  • Full ground plane on the underside of the PCB.
  • 12V supplies have more copper too.
  • The unused ADC0 pin on the Tiny2040 is now available in the expansion section for your own use.
  • A number of GPIO pins from the RPi header are now exposed in the expansion section.
  • Some development features (LED and motor power measurement) are removed.
  • PCB connector blocks have been standardised.
  • Printed warning to take care when connecting USB and GPIO at the same time.
  • NOTE: On the published Gerber files, the 'R1' resistor is marked with a lower value than these images show. Any value from 320–680 Ω seems to work fine; the lower the value, the better the transistor switches.
The published motorcontroller PCB, populated at home with the necessary components.

I have added a new folder to the GitHub project to contain the Gerber files.

https://github.com/Short-bus/pilomar/tree/main/gerber

The files for the PCB are in the /gerber/PCB-2023-12-14 folder on GitHub. You must generate a .zip file from here to use for manufacturing.

cd gerber/PCB-2023-12-14
zip PCB-2023-12-14.zip *

The PCB-2023-12-14.zip file is the one that you should submit for manufacturing.

The file gerber/readme.txt explains more about the manufacturing specifications you will need to provide when placing an order.

A second PCB design is still in testing at the moment; this one eliminates the separate Raspberry Pi power supply by adding a buck converter onto the board to act as the RPi's power source, so everything runs from the motor 12V supply.

Software development

At the end of December I released a few improvements to the software, fixing issues that the early builders found. I think people should be taking their first images soon, so I've done a little more development in January to help with tuning the telescope.

The tracking algorithm works for me, but I suspect it needs fine-tuning to individual observing conditions. There are some great-sounding locations where people are building Pi-lomar at the moment, so I've started adding some simple tools to help get the tracking parameters right. The idea is to show the results of the tracking calculation and the related parameters which can affect how it works. (I must explain how the tracking solution works soon.)

Getting the telescope working south of the Equator! I am at 50+ degrees North here, so out of extreme caution I put a warning on Instructables and in the software that the telescope might not work if you go too far south. But there is interest in building copies in the Southern Hemisphere, so with help from volunteers I'm looking at addressing some minor irritations with how the telescope behaves as you move further south. It looks like Pi-lomar will already work, though with a movement reset while tracking objects through due North. So the January release will now accept Southern latitudes for the home location and just warn you that support is still under development.

There's now an 'OptimiseMoves' parameter: when you turn it on, the telescope moves much more freely through due North, which should eliminate some irritations.
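
The default-versus-OptimiseMoves behaviour can be sketched like this. This is a hypothetical stand-in for the real motor logic, not pilomar's actual code; it just assumes azimuth runs 0–360° with due North at 0/360 and returns a signed move.

```python
def azimuth_move(current, target, optimise_moves=False):
    """Signed move in degrees from current to target azimuth (0-360 range,
    due North at 0/360). Positive is clockwise. Illustrative sketch only."""
    direct = (target - current) % 360.0
    if direct > 180.0:
        direct -= 360.0                      # shortest signed path, -180..+180
    if optimise_moves:
        return direct                        # free to rotate through North
    if 0.0 <= current + direct <= 360.0:
        return direct                        # short path avoids North anyway
    # Default behaviour: reverse and take the long way round instead.
    return direct - 360.0 if direct > 0 else direct + 360.0

print(azimuth_move(350, 10))                       # long way round: -340.0
print(azimuth_move(350, 10, optimise_moves=True))  # through North: 20.0
```

A 20° target move that crosses North becomes a 340° reversal in the default mode, which is exactly the "movement reset" irritation the new parameter removes.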

Screenshot of the warning that pilomar.py shows when the OptimiseMoves parameter is enabled. It should give smoother operation when observations face north much of the time.
Diagram of the effect of the OptimiseMoves parameter. By default the telescope will not pass due North (360°/0°); it reverses back to the other side and resumes there. Enabling OptimiseMoves allows the telescope to rotate freely past North in either direction.

I’ve opened discussions on the GitHub site for anyone who wants to join in. When the feature is fully developed and proven to work that will become the normal operation everywhere.

The January improvements will be merged back into the main branch in the next few days.

Actual observations

It has been almost constantly cloudy here for months now, and the back yard is turning to mud; even the dog is reluctant to go out there. Really frustrating! I'm SO desperate to get out and capture some more images. Nights on the east coast seem to come in three flavours…

  1. Too cloudy, calm, no moon.
  2. Clear, too windy, no moon.
  3. Clear, calm, full moon.

I’m hoping that some of the other builders will start capturing soon, maybe people can share those images too.


Hardware developments

Upgrading mk1

I have two completed Pi-lomar telescopes at the moment. After a break from 3D printing, I'm returning to the earlier build to upgrade it. Its drive mechanism now feels less smooth than the Instructables version. That's a relief: all the tweaks I put into the Instructables version made a difference. So I'll be tearing mk1 down and testing some further improvements to the drive and telescope tower. I'll take the opportunity to widen the camera cradle; it will allow more room for higher quality lenses, and also let me test the idea of an RPi5 with twin cameras later this year.

Prototype 3D printed part: first version of the twin camera cradle. It will need an RPi5, but could run a 16mm lens for targeting/tracking and a separate 50mm lens for observations. (I hope!)
3D printer under way: a modified design for the tower walls, making more space inside the dome for the twin camera cradle and larger lenses.

Removing the infrared filter

Finally time to rip that infrared cut-off filter out of a High Quality sensor. The official instructions work and it is simple to do. The lens mount comes off the sensor board easily and the IR filter pops out cleanly with a gentle push. I have left the sensor exposed, protected only when a lens is attached. I may try to re-cover it with OHP film as suggested, since exposed sensors are dust magnets! I put the sensor inside a fresh, clean freezer bag to minimise dust while making the mod.

I placed the 16mm telephoto lens on the sensor and stuck it on a tripod just to see what things looked like. Everything has now gone ‘pink’ so SOMETHING has changed anyway!

A quickly captured, tripod-mounted photo of the sword of Orion with the 16mm telephoto lens and a moonlit sky, just moments after removing the infrared filter from a Raspberry Pi High Quality sensor. It's all gone PINK, so there is definitely a change in the wavelengths being captured; that must be good, right?!? Just waiting for less moon and less wind to test it properly.

It’s not clear how wide the HiQ sensor’s infrared sensitivity is, but I think any expansion of wavelengths will be interesting to play with.

Fiddly focusing

I had to refocus the lens when I reattached it, and realised there's a better way to get it in focus. The focus ring of the 16mm lens is not very precise compared with larger lenses, and I've always struggled a bit with this. I tried a different approach this time.

I set the lens focus ring fully to 'FAR' and locked it off, then released the screw clamping the sensor-mounted rear focus ring. That has a much finer screw thread, a more positive movement, and allows really fine focus adjustment. It is mentioned in the official instructions, but I think it's the BEST way to focus if you're being fussy.

With this, the 'raspistill --focus' command trick and some patience, you can get quite fine control over the focus. You DO need a monitor connected via the HDMI port though; the preview image does not appear through VNC or PuTTY sessions.

As always, it's best to close the aperture ring a little to increase depth of field. I usually stop down to f/2.8, which is still bright; you can close it further if you are having problems.

Light pollution

We sit just outside an expanding town which is switching to LED streetlighting, so light pollution is an increasing problem. I have purchased a simple 'light pollution' filter to add to the 50mm Nikkor lens. I will be testing it as conditions allow; I wonder if it helps, and I hope it doesn't block infrared!


Other builds

As mentioned earlier, quite a few builds of the Instructables project are now underway. The first 'I made this' post has appeared (well done Jeff!), and from the messages I have seen there are a few nearing completion.

It looks like the most common pain points have been the PCB (see above) and sourcing the motors and worm gear components. Hopefully PCB sourcing is easier now with the published Gerber files. I saw a couple of cases where people shared a PCB order to reduce costs.

For the worm gear, I wonder if a future design could switch to planetary gearboxes on the Nema17 motors instead. They seem to be more widely available, and can even be purchased as complete units. They may require a rethink of the drive mechanism; I have ideas already.

At least one builder is improving the weatherproofing of the design; that will be exciting to see when it is ready. I think there is a lot to learn from that development if it happens.

There are a couple of really interesting alternative motor controller designs out there too, including alternative ways to power/reset the Tiny2040 as well.


Off the shelf motorcontrollers

I mentioned in December that I hadn't found a suitable off-the-shelf motor controller board yet. Well, in the tech world, a month is a long time. I recently came across an announcement from Pimoroni about their new YUKON board. The specifications sound interesting: it supports high current stepper motor drivers, has a modular design and an onboard RP2040. There's a fully featured builder's kit available, but you can also buy the bare board and individual driver components. Pimoroni's website has a couple of overview videos, and there's an example robot project on YouTube by Kevin McAleer. I'd like to try one of these at some point, IF I find the time. Maybe someone else will give it a try?

Pimoroni's website image of their new Yukon robotics board. I haven't tested it yet, but the specification sounds really useful for controlling the telescope motors without having to build your own PCB.

So, quite a bit to test now. Here’s hoping for some clear, calm, dark skies before the winter is over!

December 2023 update

Finally published

So the telescope project is finally out! 50% of the project time seems to have gone into making the instructions: partly because life is busier now than during the pandemic, and partly because of all the lessons learned while making them. Like so much of the project, the instructions included a lot of firsts for me too. A few mistakes have turned up in the build guide, but I've always received very kind and positive feedback and corrected them as quickly as possible. I still need to complete a full 'bill of materials' though!

New issues


Feedback from builders has revealed a few issues, and I'm expecting more to appear in the coming weeks as people get their telescopes up and running. What worked for me may not work for others; we'll soon find out what was luck and what was bullet-proof! There are so many different ways to build every element of the project that there will inevitably be variations in every model.

PCB design

The PCB has been an unexpectedly interesting part. The build videos included a PCB that I made over a year ago as part of an exercise to learn how to design circuits with EasyEDA. It included experiments and some development features, and also a mistake in one of the tracks, but with some careful track sculpting with a Dremel I got it running.

The circuit for the project is simple, and with hindsight could be even simpler. I understand that it's more comforting to have a proven circuit board rather than building your own solution. So an immediate side project fired up: a few people were kind enough to offer help creating a proper PCB design which could be published. As I type this, I have two different prototypes on my desk for testing. If both pass the tests then I'll add the Gerber files into the GitHub repository so that people can get their own boards made up too.

I still wonder if there is a suitable commercially produced HAT that would perform the same function. I’ve not found anything yet which has the onboard logic AND powerful enough stepper motor drivers. If one ever appears it would be sensible to rework the project to make use of that. There are similar ideas out there for robotics I suppose, but I’ve not yet found an appropriate specification.

I'm running the Tiny2040 microcontroller at a very low voltage, out of an abundance of caution really. When I measured it recently it showed about 2.5V across the Tiny2040. Apparently 3.0V is the recommended minimum, but two telescopes have been running nicely on 2.5V for a long time so far. I'll still revise the component specification with the new PCB design to increase the voltage a bit, which might improve tolerances for different builds.

3D printing

My humble 3D printer, limited printing skills and multiple design iterations meant that my builds took MONTHS to produce. You can imagine my amazement when people started posting photographs of nearly complete telescope structures after just a few days! They are all really nice quality prints too. It quickly emerged that at least one of the published STL files was from an older iteration, but it is corrected now. I'm resisting the temptation to evolve the designs further until a few more people have got the current design up and running; then I'll have more useful insight into areas for improvement.

Simplified kit

One question appeared quickly: 'How do I make one if I haven't got a 3D printer?'. It sounds like sending the STL files to a commercial printing company is too expensive, and there are probably too many parts to make them all at a local maker workshop. At first I thought that would rule out making the telescope completely, but after a couple of conversations I started to think differently. The only thing you really need to get 3D printed is the mechanism: basically the gears and cogs. Everything else can be made from any material you like. So I'm wondering if there's a side project possible here: a cut-down version of JUST the mechanism, maybe 10 parts, slightly redesigned so they can attach to anything. A drive-only kit would be much more manageable to get printed. I've not designed anything yet, but if I get another cloudy winter with little observation done it might make a good evening project.

New lens

I know that the project began with the question "Can the RPi camera components make a working telescope?", but of course I'm now chasing better quality. I suspect a lifelong challenge. The telescope works well mechanically with the RPi 16mm lens, but that has about a 20° field of view so things are small. I have been using the 50mm Arducam lens for about a year now and that magnifies much better, about a 5° FOV, but the question in my mind now is… are the optics high enough quality?
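
Those field-of-view figures can be sanity-checked with the standard pinhole formula FOV = 2·atan(sensor / 2f). The sensor width of ~6.29 mm for the HQ camera's IMX477 is my assumption here; the exact answer also depends on whether you quote horizontal or diagonal FOV, which is probably why the results land a little above the rough 20°/5° figures quoted in the post.

```python
import math

def fov_degrees(sensor_mm, focal_mm):
    """Horizontal field of view of a simple pinhole lens model."""
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

SENSOR_WIDTH_MM = 6.287   # approx. active width of the HQ camera sensor (assumed)
fov16 = fov_degrees(SENSOR_WIDTH_MM, 16.0)
fov50 = fov_degrees(SENSOR_WIDTH_MM, 50.0)
print(round(fov16, 1), round(fov50, 1))   # roughly 22 and 7 degrees
```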


I’ve noticed that even quite poor images are rescued very well by the stacking software, but I couldn’t help trying a higher quality 50mm lens. So I’ve fitted a Nikon-to-C adaptor and mounted a regular 50mm Nikkor SLR lens to the telescope. I grabbed about 20 frames during a brief unexpected gap in the clouds the other night.


Some immediate thoughts…

  • Focusing is MUCH easier with an SLR lens. It was quite fiddly with the little C/CS lenses, but the SLR lens was producing crisp star points in minutes.
  • The camera cradle is JUST big enough to squeeze the 50mm lens in, but the weight of the lens is an issue. I modified the camera cradle to make use of the tripod mount hole at the bottom of the HQ sensor. That should take the weight of the lens better and reduce the pressure on the rest of the PCB. It's generally a better design even for the smaller, lighter lenses.


It's raised an unexpected problem though: the new images have a definite CYAN tint to them. Is that a feature of the SLR lens coating? Is it because the image is just more crisp? Is it only in the JPG images or is it in the .DNG raw files too? Experiments and/or other people's advice are needed here.
And I have STILL not been brave enough to take the IR filter off the sensor.

Software

The first few people to start builds have identified some fixes which are in the next software release; I really appreciate their patience while we get all this working easily for everyone. My main focus at present is improving the problem-solving capabilities of the software.
Some examples:

I am adding a feature to help tune the telescope's tracking function. You have to balance two or three parameters to get it running smoothly, so a tool to help find good values seems sensible.

Debugging communication has been an exercise for everyone, so I'm cleaning up some error messages to make things a little clearer there.

There are a couple of extra ‘version’ messages in the log files now so we can see what versions of components are installed.

Raspberry Pi 5

I could not resist so I ordered one online… and had to wait… meanwhile I was in the Raspberry Pi shop in Cambridge the other day and they had a bunch on the shelves… I very nearly bought another.

The current project is restricted to the 'Buster' operating system and the RPi4B. (The RPi3B seems to work too, but is slower.) Both are ageing, and I fear something critical may one day become unavailable. I already had the problem that the Buster image vanished from the official installer tool at about the same time I published the instructions; luckily there is an archive of all the old versions available. So I need to plan for an updated build eventually.

The RPi5 is a sufficiently different architecture that the setup and some functions will need rethinking. But if I can get the telescope working on the RPi5 there are useful new capabilities.

  • libcamera replaces the old raspistill camera handler. I’m hoping that makes handling of the RAW image data simpler. Let’s see.
  • The RPi5 of course is more powerful, which improves performance. Maybe onboard image stacking becomes viable if I can find a LINUX live stacker package somewhere.
  • The RPi5 supports 2 cameras simultaneously. This is very useful. Today the telescope uses a single camera for IMAGES and also for DRIFT TRACKING. In practice a single lens cannot be optimised for both functions, but 2 separate lenses solves that. I think a 16mm lens for the drift-tracking and a 50mm SLR lens for capturing the observations would be great. That may allow different tracking strategies too.

My heart sinks at the thought of fighting through all the package dependencies again, perhaps I should still wait a while for everything to stabilise.

Observing!

Of course the purpose of a telescope is to actually make observations! So I'm really hoping we have a better winter than last year. So far the forecast has been quite poor, but we're occasionally getting unexpected clear periods in otherwise total cloud cover. It means you have to keep an eye out for breaks in the cloud, because the forecasts aren't picking them up. A fully weatherproof telescope would be a real bonus here; then you could grab the brief random opportunities that present themselves. The light pollution has been quite bad around our village too: we're close to the coast, which seems to leave mist in the air, and recent building development is adding to the lighting problems. The need to make this thing fully portable is growing!

November 2023 update

Done! Finally published the telescope project on Instructables.

https://www.instructables.com/Pi-lomar-3D-Printed-Working-Miniature-Observatory-

That took a lot longer than I expected. If you want to really learn something, try to teach it. Writing and recording simple instructions for making the telescope meant revisiting every piece and stage of the project. Of course, there’s always something to improve whenever you look at even the tiniest element of the project.

So the published project is more accurate, more robust and more repeatable than the first version I made quite some time ago now.

If you're interested in the code for it, that's available on GitHub.

https://github.com/Short-bus/pilomar

The published code is a simplified version which focuses on the camera and motors to capture the images. I have another copy of the code which is filled with experiments and development features. If I get a good winter of observations done this year, no doubt some of the new features will make it into the published project too.

There's a first draft of a manual in the GitHub repository too; it's a .pdf in the /docs directory. This is for my benefit as much as anyone else's!

If you've been waiting to see how I built the original telescope that was revealed on X (formerly Twitter) a couple of years ago, then this Instructables project is the place to look.

What next?

I’ve a list of possible developments to make which I think will now roll forward into a ground-up rebuild for a slightly larger system.

The new Raspberry Pi 5 minicomputer supports 2 cameras, which is really useful in this project. I can have one camera dedicated to tracking while the other is dedicated to capturing the images for stacking. The two cameras can use different lenses too, so tracking can be wide-angle (which works best) and the 'light' image-capture camera can have a longer focal length.

The next version needs to be more portable. It’s possible to run the current one out in the field if you have enough extra bits of kit, but it could be vastly simplified.

I don’t know if it’s practicable to make it properly weatherproof, but I’d like to investigate that a little more too. I suspect the complexity and cost may kill this option, but it would be great if it could sit out permanently to make most use of the occasional observation opportunities that the cloudy UK provides.

I experimented with live stacking of the images a while back, although I got close, it wasn’t a perfect solution. However as the Raspberry Pi power increases it may be possible to implement one of the open-source live-stackers directly on the RPi and just feed images directly into it.

There were several dependencies in the project which restricted me to a legacy Raspbian OS, and most of the packages it uses are limited to specific versions. With the RPi 5 it would be time to upgrade to the latest O/S if I can get all the packages available again. This would also be the time to switch to libcamera and rework the camera handling to streamline it further. There is noticeable time lost in overheads at the moment using the original camera utilities.

I'm trying to decide whether to continue with the 'remote UI' that the published telescope works with, or whether it would be better to rethink that completely: either a web UI that could be accessed via a smartphone, or perhaps just a small screen and keyboard sitting with the device to keep the complexity down. I don't like having to rely upon the home wifi network; it causes occasional problems.

It would be great to make the resolution of the motors even finer, but I think I'd have to completely rethink the design for that. It probably needs a few experiments to decide which way to go. Switch from gears to belts? Convert from Alt-Az to Polar? Polar adds significant structural complexity I fear, but improves image alignment.

But for now, I’ll just compile a wish-list of new features, and spend the winter using the current telescope as it is.

In fact – I have 2 at the moment. The original one (with some tweaks) and the latest version that I built for the Instructables project page. Hmm…. what to do with TWO running at the same time?

April 2023 update

Build instructions

Life got spectacularly busy over the winter so the build instructions for the telescope are of course behind schedule. But I've been chipping away at it.

I've made some very amateur build videos which I'm currently editing and getting on to a draft Instructables page for the project.

I'm also slowly digging into GitHub to make the software and installation scripts publicly available.

Rotten weather

It's been another winter of incredibly cloudy nights, including missing out on some fantastic aurora displays recently. Astronomy really requires some calm acceptance of what the universe throws at you, doesn't it?

When I have been able to work on the project I've been testing, testing, testing. In bad weather or daylight it's hard to capture images of course. The software can simulate basic images if you cannot see through the clouds; that feature was originally just a learning exercise, but it's proven really useful this year.

Improved tracking

Recently we had a rare, incredibly clear and still night, the best I've seen in years, but as we are now into spring skies the objects I have been photographing are disappearing into the west. And of course a beautiful full moon was slap bang in the middle of some new potential targets!

Anyway, I decided to exercise the tracking features of the telescope on some new areas of the sky, and found an unusual behaviour when using 'astroalign' to detect and correct tracking errors. My attempts to make the tracking easier had actually been confusing it in some circumstances. I am very grateful to Martin Beroiz for his quick response to my question; I learned some valuable improvements I can make to my tracking strategy.

Different lenses

In the interests of 'I wonder what will happen if…' I also swapped out the 16mm lens for a 50mm lens. This gives a much narrower field of view and takes the telescope to the limit of its motor precision. I'm not going to redesign this version, but it's been a useful exercise to fine-tune a number of features. If it can work well with the 50mm lens, it should be even more solid with the intended 16mm one. The telescope is still designed for the 16mm lens, but this opens up the lens choice a bit.

The first 50mm lens I received had some slack in the mechanism; it took me a few nights to realise why I couldn't focus it properly and why all the stars looked like comets… but customer service at The Pi Hut was fantastic and I soon had a replacement in hand. Anyway, it looks like the telescope works with lenses between 16mm and 50mm, so that's a range of about 5° to 20° field of view.

Many Messier objects are quite small with the 16mm lens, so longer lenses make more targets viable.

Infrared Filter

I'm still looking for some insight into the benefits of removing the IR filter on the HQ sensor… I've not found any clear confirmation that it's a real improvement for astrophotography yet. My limited understanding of the documentation suggests that it may not extend detection very far into IR anyway. I think I will wait before hacking the filter off; maybe it's an experiment for when I retire an earlier build of pilomar that I still occasionally use. (Yes, I have two in action. No, I have no idea how to do interferometry with them 😀)

Hardware shortage

The shortage of Raspberry Pis has been a bit of a frustration. I'd like to swap to a larger memory version, but that's going to have to wait; and maybe now it's better to wait and see if an RPi 5 comes along one day. Having said that, keeping things running nicely on more modest hardware is good. I wonder if I could get it running on a Pi Zero again? A really stripped-down version should still work; I think the working memory used by the OpenCV image arrays may be the challenge.
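
The OpenCV memory concern is easy to quantify with some back-of-envelope arithmetic. A full-resolution HQ camera frame is 4056×3040 pixels; as a uint8 BGR array in OpenCV that's already tens of megabytes, and a 16-bit working copy doubles it, so holding even a handful of frames plus intermediates quickly overwhelms a 512MB Pi Zero.

```python
# Back-of-envelope arithmetic for the OpenCV memory concern above.
WIDTH, HEIGHT = 4056, 3040               # Raspberry Pi HQ camera resolution

frame_8bit = WIDTH * HEIGHT * 3          # one uint8 BGR array in OpenCV
frame_16bit = WIDTH * HEIGHT * 3 * 2     # a 16-bit working copy doubles that
print(frame_8bit // 2**20, "MiB")        # about 35 MiB per frame
print(frame_16bit // 2**20, "MiB")       # about 70 MiB per frame
```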

Other features

I've also experimented with some other features and have now decided which ones to keep and which to drop.

Memory cards

During the winter, under heavy testing, I noticed that the RPi 4 at the heart of the telescope suddenly slowed down significantly. After investigation I figured that the SD memory card was struggling: having saved, processed and deleted many thousands of images, the card was starting to fragment or corrupt in some way. The SD card is the only storage for everything on the telescope. A simple solution is a quick rebuild of pilomar on a new SD card, but I guess this problem will keep recurring in heavy use. So the software now supports USB memory sticks as alternate storage. If one is found at startup, pilomar will save images to the memory stick instead; if there is no memory stick, it stores everything on the SD card as before. The advantage is that the USB memory can be much larger, so you can gather more images before offloading for stacking, AND you can now transfer images to other computers for processing by simply moving the USB stick across. The plug'n'play support feels a little shaky, especially if you are running headless, but it has worked reliably for me so far.
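
The fallback idea can be sketched in a few lines. This is a hypothetical illustration, not pilomar's actual code, and the paths are invented for the example: prefer the first genuinely mounted device under the media directory, otherwise fall back to a directory on the SD card.

```python
import os

def pick_storage_root(media_dir="/media/pi", sd_fallback="/home/pi/pilomar/data"):
    """Prefer the first mounted USB stick under media_dir, else the SD card.
    Paths are illustrative assumptions, not pilomar's real configuration."""
    if os.path.isdir(media_dir):
        for name in sorted(os.listdir(media_dir)):
            candidate = os.path.join(media_dir, name)
            if os.path.ismount(candidate):   # a real mount, not a stale folder
                return candidate
    return sd_fallback

# With no USB stick present, we fall back to the SD card path.
print(pick_storage_root(media_dir="/no/such/dir", sd_fallback="/tmp/sdcard"))
```

The `os.path.ismount()` check is the important detail: a leftover empty directory under `/media` should not be mistaken for an attached stick.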

Live image stacking

I played with live stacking of the images too! I thought it would be too complex for my limited understanding, but decided to give it a go anyway and see what I learned. It has some potential, and I got closer than I expected to a working solution, but I have decided to drop the feature for now. Live stacking would be very interactive, but using a dedicated image stacker offline gives better results. So that experiment is going in the bin too.

Lens distortion

I studied lens distortion a bit as part of the live-stacking experiment. The 50mm lens has very little in practice, but there is noticeable distortion at the edges of 16mm images. The distortion was not generally enough to justify extra complexity yet; dealing with it may enable some other features further down the road, but nothing serious at this point. So that's going in the bin too. It may return if I go back to live image stacking one day.

Observing conditions

As a result of the poor observing conditions recently, I also added a weather monitoring feature. When you are sitting inside and the telescope is outside, it's handy to know roughly how conditions are developing. I use the API from metcheck.com, so the telescope is not measuring local conditions directly but using hourly forecast data from Metcheck. There's almost too much data available there, but it's useful for planning and monitoring observations, so it's going into the official version.

Metcheck.com provide a JSON API for various sets of weather data, including an astronomy-biased dataset. I've found it really useful. Here's an example for Aberdeen.
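Fetching and unpacking that data only takes a few lines. This is a hedged sketch: the endpoint path, the `Fc=As` parameter for the astronomy dataset, and the JSON field names are written from memory and should be checked against Metcheck's own API documentation:

```python
import json
from urllib.parse import urlencode

# Assumed endpoint and parameters, written from memory; verify against
# Metcheck's API documentation before relying on them.
METCHECK_BASE = "http://ws1.metcheck.com/ENGINE/v9_0/json.asp"

def metcheck_url(lat: float, lon: float) -> str:
    """Build a request URL for the astronomy ('Fc=As') forecast dataset."""
    return METCHECK_BASE + "?" + urlencode({"lat": lat, "lon": lon, "Fc": "As"})

def hourly_forecasts(payload: str) -> list:
    """Extract the hourly forecast records from a Metcheck JSON response."""
    data = json.loads(payload)
    return data["metcheckData"]["forecastLocation"]["forecast"]
```

Each hourly record then just needs filtering down to the handful of fields (cloud cover, seeing, and so on) that matter for planning an observation.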

Quality control of images

I have wondered about detecting clouds in images so that they can be removed automatically, but haven't solved that yet. However, the software will now detect meteor/satellite/aircraft trails using some OpenCV line detection routines. That's staying in the software. It's useful for ignoring unwanted images with trails, and also for spotting meteors if you're just capturing a meteor shower!

User interface

Pilomar currently uses a simple character based UI. The observation dashboard uses a small home grown character library that I made for other projects. There are several other UI frameworks available, but so far I've considered integrating one to be quite low priority.

I recently experimented with adding a simple web interface. It would be great to operate the telescope directly from a mobile phone, for example. I got a simple live feed of the latest image onto my phone, but it's quite a 'can of worms' to make it really slick! I think at the moment it's still too big a distraction and potentially better left for a full rewrite of the software sometime, so the software remains 'character based' only for now. A web interface may be useful if I ever convert the telescope to being fully mobile… maybe running it all off a 12V car battery so I can get to remote dark skies. The current UI already has some basic flexibility for terminal window sizes, so it could be run on smaller devices through a terminal emulator I guess. Need to try it.

Finally

So that’s it, been a busy software development winter, with very few chances to make observations. But I’m on the final push now to get the build instructions published. Nearly there!

September 2022 Update

It’s almost a year since I posted an update, so I thought something was due. I spent winter 2021/2022 using the Pi-lomar telescope, then used the lessons learned to make further refinements.

The main changes have been in the software; a few months of real life usage always reveals bugs and areas for improvement. Of course there are also new ideas to explore too.

The brain

  • Raspberry Pi V4 (still)
  • A new O/S version is available based upon Debian V11 – Bullseye
  • I tried to build a new version based upon this, but found several showstoppers at my level of experience. The camera system has changed, and several of the packages that Pi-lomar depends upon are not yet happy in the new O/S. So at the moment I’m restricted to the previous Debian V10 – Buster build. The new camera support in Bullseye is really interesting, but will have to wait until I can get the basic functionality working.
  • I’ve done some work to simplify the build script on the Buster build as that’s now obviously stable and the very latest release seemed to simplify the build requirements a lot.

The muscles

  • The motor control has proven quite a challenge to get to a level I’m happy with. But finally it seems I’ve moved forward here.
  • Originally I drove the motors (via drivers) directly from the RPi itself. But a Linux O/S is not ideal for realtime motor control, so a long project began to introduce a microcontroller to handle the motor control. I started with the Raspberry Pi Pico using the RP2040 chip. Oh boy, did I have problems here! No matter how simple the task or how I chose to power it, I had problems with random resets. I got through 4 of these boards before trying something else. I switched to the lovely Adafruit Feather RP2040 but had similar problems. I switched again to the great little Pimoroni Tiny2040 and the problems vanished, but I had fewer GPIO pins to work with, so some reworking of the CircuitPython code was needed.
  • The first ‘wiring’ of Pi-lomar was originally a real ad-hoc construction directly off the RPi GPIO header. It worked but looked crazy.
  • The second version with the flaky microcontroller first attempts was a little neater using a prototyping board.
  • When I switched to the Pimoroni Tiny2040 I also played with designing a custom circuitboard for it. A friend recommended trying EasyEDA as an online designer. After a few weeks learning that and playing with options I had a design that I figured was worth trying. I cannot overstate how impressed I was with the process to convert my amateur thoughts into an actual circuitboard.
  • I downloaded the ‘Gerber files’ from EasyEDA, sent them to PCBWay, and less than a week later had 5 bare boards on my desk. Manufactured and shipped from China to the UK in under 7 days. Amazing.
  • I’m building a 2nd copy of the telescope now to use for the construction video and it will incorporate this new board hopefully.
  • Anyway, I now have a CircuitPython routine which runs nicely on a microcontroller and can handle motor movement independently of the RPi. The RPi can now just concentrate upon high level activities: user interaction, planning, and photographing things.
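The division of labour is straightforward: the RPi sends a target position, and the microcontroller works out the direction flag, step count and pulse timing for the driver chip. A plain-Python sketch of that calculation (names and numbers are illustrative, not the actual CircuitPython code):

```python
def plan_move(current_steps: int, target_steps: int, move_seconds: float):
    """Turn current/target motor positions (in microsteps) into the
    direction flag, step count and inter-pulse delay for a driver chip."""
    delta = target_steps - current_steps
    direction = 1 if delta >= 0 else 0      # value for the driver's DIR pin
    steps = abs(delta)
    delay = move_seconds / steps if steps else 0.0
    return direction, steps, delay
```

On the board itself, each step then becomes a short HIGH/LOW pulse on the driver's STEP pin, separated by the computed delay.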

The software

  • The software on the RPi is still Python 3 based.
  • I’ve produced two versions now. Pilomar-lite – which I intend to publish, and Pilomar2 which has more functionality, but is also a playground for new experiments.
  • Both versions are ‘amateur’ builds, they could certainly be written more cleanly, but I’ve favoured functionality over beauty here.

Pilomar-Lite

This has all the functionality that the original version of Pi-Lomar had, with many bugs ironed out. It seems to work quite reliably, but hasn’t been tested by anyone else yet. That’s always a more thorough test!

Pilomar2

This has several extra experiments, most of which will probably die eventually, but some may prove useful. I'm currently trying to teach it to detect and compensate for lens distortion. There are some well established ways to do this using OpenCV, but they all require some extra steps and may prove complex to do in practice. So I'm playing with the idea that the telescope can gradually learn the distortion itself from actual observations and build its own compensation solution… The telescope works fine without this, but I want to be able to analyse a captured photograph and identify the actual objects in it more accurately in the future. If I get it to work, then other projects become possible. It may also help me to get the 'image stacking' working in realtime on the RPi directly one day. Image stacking is still done offline on a separate PC by downloading the individual images captured by Pi-Lomar.

Build instructions

A couple of people have shown interest in getting build instructions for the telescope. I must admit I thought this would be simple to do, but I haven't come up with a quick, clear and easy way to cover the whole thing yet. I'm currently tackling it in a couple of ways.

For the RPi build and software installation, I’m simplifying as much as possible what needs to be done.

The 3D printing files (.stl files) have been refined and are pretty much ready to download; the V2 build will also verify that everything still goes together properly. And of course it refreshes my memory of how I made the first one 🙂

For the physical telescope build I realised that writing a manual is quite tricky; I'm clearly not as good as IKEA at this! So I'm currently considering just making some simple videos to show the construction process. That's faster to make, and faster to view… I hope!

A simulation of the motor driver board. Home to a Pimoroni Tiny2040 to handle the DRV8825 motor driver chips.
… and an actual one…

October – Stars at last!

Darker nights and a few clear skies have appeared again. So I finally have some opportunities to see how well the summer’s development work has succeeded. Compare the very first photo I captured a year ago using the 1st version of the telescope with the latest V2 mechanism last night.

Andromeda Galaxy. Yes really! 🙂 If you look very hard and use your imagination. This was the first image taken with the 1st version of the mini observatory.

The original has a slight smudge where the Andromeda Galaxy should be, lots of raw signal noise and some out of focus stars. At the time I was delighted because at least I’d FOUND the right part of the sky and managed to capture a couple of stars.

Last night’s image is starting to look more promising…

Andromeda Galaxy a year later. One of the smaller satellite galaxies is also visible just above it.

I’m happier with this 2nd image, its showing some progress, although I think it can be improved further.

As usual, every observation session uncovers new issues, and provides more ideas for improvement. This week’s list is to look at making the altitude movement smoother (there’s some slippage at a certain angle), and to debug the image tracking (I think I’ve introduced some glitches during the summer’s development work).

Pilomar September ’21 update

It’s been a very poor summer for astronomical observations around here this year. We’ve had very few truly clear skies until just the last couple of weeks. I’ve heard similar comments on some of the forums too. Fingers crossed it improves for us all now that the darker nights are here.

So during the year I've been concentrating upon improving the design of the Pilomar assembly. I've reworked the internal mechanism and the electronics to iron out several limitations that became apparent in the first version. The most obvious were:

  • The bearing mechanism was slightly rough, causing the motors to slip occasionally, which could cause position errors. It also had a limited range of movement (a 180 degree azimuth range, East-South-West).
  • The Raspberry Pi was responsible for both taking the photographs and moving the motors. Linux isn’t ideal for realtime motor control, and sure enough it was proving difficult to take long exposures and keep the telescope moving smoothly.
  • The precision of the original version was 66 positions per degree. This was at the limit of the resolution of a pixel, causing some blurring on even short exposures.
  • It was quickly apparent that the deep-sky objects that I wanted to photograph were going to require longer exposures than I could reliably achieve.

So version 2 was born. I’ve reworked the bearings to allow more complete and smoother movement. The motor control is now performed by a separate microcontroller (Adafruit’s Feather RP2040). I didn’t want to include a microcontroller originally, it felt too complex, but it turned out to be the most direct way to overcome the Linux limitations. So I had a crash-course in microcontrollers and CircuitPython. In fact several crash courses! It took far longer than I hoped to get a working solution. The azimuth and altitude gearing has been reworked to provide 266 positions per degree, well within the tolerance of the camera to keep blurring to a minimum. The main Python program has also been extensively reworked to improve reliability and ease of use.
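The positions-per-degree figure falls straight out of the drive train arithmetic: motor steps per revolution × microsteps × gear ratio ÷ 360. A quick illustration (the step count, microstepping and gearing here are my own example numbers, not necessarily pilomar's):

```python
def positions_per_degree(full_steps_per_rev: int, microsteps: int,
                         gear_ratio: float) -> float:
    """Motor positions available per degree of telescope movement."""
    return full_steps_per_rev * microsteps * gear_ratio / 360.0
```

For example, a 200-step motor through 15:1 gearing gives roughly 66.7 positions per degree at 1/8 microstepping and roughly 266.7 at 1/32, which is in the same ballpark as the figures quoted in this post, though purely an illustration.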

I can already think of things for Version 3! But this type of project never ends I guess.

Just this week while checking for an ISS pass I noticed that the sky was remarkably clear for the first time in many months, so rushed out the incomplete V2 to give it an initial real-life test.

The choice of observation target was a little limited, but I chose something challenging: the M27 Dumbbell Nebula. That's a smaller and fainter target than the Orion Nebula that I captured in February, so I had low hopes for any success on a first attempt. Just 11 images (20 seconds each) were captured before the clouds came in, and I had not yet properly focused the lens. But I was delighted to see a tiny 'hint' of the nebula in the very first batch of images I captured. At this point the important fact is that it managed to track DURING long exposures, with no star trails!

So I’m hoping for a few more clear nights so that I can fine tune things further. And also complete printing the DOME ! It’s naked at the moment.

Achieving ultra-crisp focus with the little RPi High Quality Camera 16mm lens is a real challenge. I'm currently investigating 'Bahtinov masks', which are used to help focus larger telescopes. It's proving fiddly to scale one down to the size of a small lens, but experiments continue!