February 2026 update

Been a while since I updated the blog so thought I’d post some updates on the project.

Peace and quiet, a TMC2209 story

The original project uses DRV8825 stepper motor drivers. These were the most commonly recommended drivers when I was first learning how to drive motors from a Raspberry Pi computer many years ago. They are simple, but they are noisy. If you are young you can hear them whine at higher frequencies! The signals they send to drive the motor coils are quite rough. So the project works, but I was uncomfortable with the noise, especially when making large moves. You can reduce the loud mechanical noise by switching to microstepping, but the high-pitched whine seemed to be a permanent feature.

A couple of people in the pilomar project group on Discord have recommended (and used) TMC2209 stepper drivers instead. I am grateful for those alternative builds; they gave me some initial pointers on how to convert the project to use TMC2209s.

TMC2209 boards are a similar size to the DRV8825 and have almost the same layout, but they are more sophisticated devices. They have more options and, most importantly, much smoother control signals!

Original DRV8825 on the left, BigTreeTech TMC2209 on the right.

One big advantage of the TMC2209 is that you can perform more functions AND more configuration tasks directly with the chip via a UART channel. The DRV8825 requires potentiometer adjustments and individual pin signals for everything; the TMC2209 does not, which may make it easier for people to get the project up and running initially.
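
For anyone curious what configuring the chip over UART actually involves, here is a minimal CircuitPython sketch of a single register write, based on the datagram format described in the TMC2209 datasheet. The pin assignments and the example register value are placeholders rather than the actual pilomar wiring or settings.

# Minimal sketch: writing one TMC2209 register over its single-wire UART.
# Pins and the example value are placeholders; the 8-byte datagram layout
# (sync, node address, register|0x80, 32-bit value, CRC) follows the datasheet.
import board
import busio

uart = busio.UART(board.GP0, board.GP1, baudrate=115200)  # TX, RX (placeholder pins)

def tmc_crc(data):
    # CRC8 used by the TMC2209 UART protocol (polynomial 0x07, data fed LSB first).
    crc = 0
    for byte in data:
        for _ in range(8):
            if (crc >> 7) ^ (byte & 0x01):
                crc = ((crc << 1) ^ 0x07) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
            byte >>= 1
    return crc

def write_register(node, register, value):
    # Send one write datagram to the driver at the given node address.
    frame = bytearray([0x05, node, register | 0x80,
                       (value >> 24) & 0xFF, (value >> 16) & 0xFF,
                       (value >> 8) & 0xFF, value & 0xFF, 0])
    frame[7] = tmc_crc(frame[:7])
    uart.write(frame)

# Example: set the hold/run currents via IHOLD_IRUN (register 0x10) on driver address 0.
write_register(0, 0x10, 0x00071703)  # example value only - derive yours from the datasheet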

The challenge, however, was the UART communication! The original design uses Pimoroni Tiny RP2040/RP2350 microcontrollers, and there are not enough pins available to add the extra serial communication lines needed. I needed more GPIO pins!

Tiny2350 vs Pico2: At last I have enough GPIO pins!

Very early in the project I had rejected the Raspberry Pi Pico RP2040 because it had proven too unreliable, with constant random resets that I could not eliminate. Now the Pico 2 (RP2350) is available, and testing showed that its reliability is much better. So a large rewrite and restructure of the microcontroller code was started. That took all of my spare time throughout 2025 – which explains why there have been no blog posts in 2025 either! Thanks go to Thomas Proffen too for kick-starting the code restructuring for the microcontroller.

But after a lot of digging, learning, programming and testing I think I have a TMC2209 solution ready to roll back into the project. So far the results have been astounding. When I first ran the new solution I thought it was not working at all: the move commands seemed to produce no response from the telescope. After some panic I realised that it was in fact working, but was so incredibly quiet and smooth that you couldn’t tell it was moving.

The TMC2209s have other tricks up their sleeve for future development. It looks like they can perform some actions even more autonomously than the DRV8825s, so there may be more features to introduce yet. But I’m really happy with the conversion so far. The telescope will finally sit and work on the desk in front of you and you’ll not even notice!

I desperately need some clear, calm nights now, before the lighter nights arrive, to really test the new board on some real observations. I want to verify as far as possible that all the engineering changes are stable before releasing the new versions.

Some key points in the latest design

  • The motor power monitoring is back. You can now measure the motor power supply voltage again (see the sketch after this list). This is useful when troubleshooting and also when running from batteries.
  • The board now hosts a Raspberry Pi Pico 2 (RP2350) with more stability and more capacity.
  • The DRV8825 sockets have been reworked to support the TMC2209 instead.
  • There are more GPIO pins available on the Pico 2, so more have been exposed around the board edge for future expansion.
  • 5V, 3V and 12V power points have been added to drive fans or other ‘light load’ accessories.
  • The Pico2 I2C channel is made available to support position sensors and other add-ons.
  • The ‘reset transistor’ circuit is eliminated for the microcontroller. That circuit was there to work around the lack of a reset pin on the Tiny2040. The new design supports a USB cable during operation better than before, and the voltage supplied to the microcontroller is much higher now.
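
The first item in that list is the sort of thing the Pico 2 makes very easy. As a rough illustration (not the actual board values), assuming the motor supply reaches an ADC pin through a voltage divider:

# Minimal sketch of motor supply voltage monitoring in CircuitPython.
# The ADC pin and divider ratio are placeholders, not the real PCB values.
import board
import analogio

DIVIDER_RATIO = 11.0                    # e.g. a 100k:10k divider (assumption)
vmot_adc = analogio.AnalogIn(board.A0)  # placeholder ADC pin

def motor_voltage():
    # AnalogIn.value is a 16-bit reading scaled against reference_voltage (3.3V).
    v_pin = (vmot_adc.value / 65535) * vmot_adc.reference_voltage
    return v_pin * DIVIDER_RATIO

print("Motor supply: {:.2f} V".format(motor_voltage()))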

Another PCBWay order

Changing the microcontroller and the stepper drivers meant revisiting the PCB design again. I had already converted the project from EasyEDA to KiCAD in an earlier iteration, so it was time to re-open KiCAD and start adjusting the board design.

I had a new batch of the boards produced by PCBWay.com in China just before Christmas. The boards arrived, as always, remarkably quickly; I had them on my workbench less than a week after submitting the order. The speed of delivery alone is a great time saver for the project. I couldn’t make such a good quality board at home, and certainly couldn’t make a bunch of them as quickly as they can. These manufactured boards are also cheaper than the protoboards I was buying before! They look great, almost as if I know what I’m doing (I confess I’m still nervous!). I ordered the new ‘matt black’ finish on the PCBs and they look fantastic. I’ve used a different PCB colour for each version of the design, but I think I’ll stick to the matt black from now on.

The new Matt Black PCB finish looks really neat.

I always assumed I would be building the project with protoboard circuits, but I love the compactness and reliability of a professionally manufactured board, so I’m not going back now. My current development route is:

  • 1 – Breadboard: to learn what connections and components are needed.
  • 2 – Protoboard: to test reliability and operation.
  • 3 – Manufactured PCB: for reliable and compact deployment.
Which do you trust most? My protoboard mess or a manufactured PCB?

Having the boards made by PCBWay’s industrial process also increases the reliability of the circuits, so I have less to worry about when assembling the final components. I typically order a batch of 5 or 10 boards so I have enough for my working builds plus a few spares for further experiments or repairs.

I usually only publish the Gerber files in the GitHub project, but I see I can also create a kind of template order on the PCBWay website which would be fully configured and ready for anyone to place their own order too. That looks like a handy feature for people who are not confident in building or ordering manufactured PCBs.

I’m currently ordering unpopulated PCBs for development flexibility but I’m wondering about trying their more complete assembly service eventually too.

Testing went well, although I found one minor routing mistake in my KiCAD design. Luckily I was able to solve it with a couple of jumper wires; I didn’t even have to hack the board.

Application development

I’ve continued restructuring the code, but of course, like all projects, it’s also becoming larger as more features are added and more edge cases are handled. So splitting it into cleanly defined modules is sensible. Isolating the logic from the user interface more cleanly will also open the door to alternative UIs in the future.

Installation is still the same procedure as before, but there are now more files to transfer when you set up the project. The microcontroller code is very specific to the Pico2/TMC2209/PCB combination, so I have split this into a separate circuitpython subdirectory in the GitHub repository.

When using the TMC2209 drivers the software can now handle more of the driver configuration programmatically, which makes it easier to get up and running. You don’t need to adjust potentiometers, measure voltages or calculate anything manually to get the motor running properly. Part of me wishes I had known about the TMC2209 at the start of the project, but the other part of me is relieved that I started with the simpler DRV8825s!
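
As an illustration of what ‘calculated in software’ can replace, the run current is a good example. With a DRV8825 you measure and adjust a reference voltage by hand; with a TMC2209 a value can be worked out and written to the IHOLD_IRUN register. The sketch below uses the current formula as I read it from the datasheet, plus the hypothetical write_register() helper from the UART sketch earlier; the sense resistor value is an assumption, so treat it as a sketch rather than a recipe.

import math

R_SENSE = 0.11   # ohms - typical for TMC2209 modules, but check your board (assumption)
V_FS = 0.325     # full-scale sense voltage with vsense=0, per the datasheet

def irun_for_current(i_rms_amps):
    # Current-scale value (0..31) for a target RMS motor current.
    # Rearranged from the datasheet formula; verify before trusting it.
    cs = 32.0 * i_rms_amps * math.sqrt(2) * (R_SENSE + 0.02) / V_FS - 1
    return max(0, min(31, round(cs)))

irun = irun_for_current(0.8)                 # e.g. 0.8A RMS run current
ihold = max(1, irun // 2)                    # reduced hold current
value = (8 << 16) | (irun << 8) | ihold      # pack IHOLDDELAY, IRUN, IHOLD
# write_register(0, 0x10, value)             # send it using the earlier UART helper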

AI assistance

I started experimenting with generative AI to help with some programming tasks. Initially this was out of curiosity, but I soon noticed that it was very useful in a couple of areas.
1) Generating boilerplate code, simple routines that took time but were easy to define and verify.
2) Generating efficient code using algorithms, tools or techniques that I was not confident with.

The first case is relatively simple: as long as you know what code you are expecting, and are able to verify that it is correct and safe to use, the AI can generate some routines for you rapidly. I always have to adjust the code a little to fit the project better, but it has been a timesaver a few times.

The second case is more interesting. There are two tasks where it helped me significantly.

I wanted to do some complex sky projections of data (see the aurora utility below). I managed to do the projection using my very basic trigonometry skills, but the result was SLOW and I knew for certain that there would be much faster ways to do the job. But I don’t have experience with more advanced transformations and transpositions of arrays of coordinates.
So I defined what data I had, what I wanted to achieve and which tools I wanted to use (e.g. numpy) and started an iterative development cycle with Microsoft’s Copilot. It took some time and I had to restart the discussion three times, but I eventually learned how to discuss the problem and get some useful code from it. The risk here is that I still don’t fully understand some of the calculations it is performing – but I can verify the input and output data using my original code. You can of course continue the discussion with the AI to get more understanding of the code it wrote; it can make a patient tutor at times. The new code produced a significant performance improvement: my projection calculation went from about 90 seconds to about 1 second.
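
To give a flavour of where that kind of speed-up comes from, here is a generic before/after sketch (not the actual pilomar projection code): a per-point Python loop over coordinate arrays replaced by whole-array numpy operations.

import numpy as np

# Toy example only: converting arrays of altitude/azimuth angles to unit x,y,z vectors.

def to_xyz_loop(alt_deg, az_deg):
    out = []
    for alt, az in zip(alt_deg, az_deg):            # one set of trig calls per point - slow
        a, z = np.radians(alt), np.radians(az)
        out.append((np.cos(a) * np.sin(z), np.cos(a) * np.cos(z), np.sin(a)))
    return np.array(out)

def to_xyz_vectorised(alt_deg, az_deg):
    a = np.radians(np.asarray(alt_deg))             # whole-array operations instead
    z = np.radians(np.asarray(az_deg))
    return np.column_stack((np.cos(a) * np.sin(z), np.cos(a) * np.cos(z), np.sin(a)))

alt = np.random.uniform(0, 90, 100_000)
az = np.random.uniform(0, 360, 100_000)
assert np.allclose(to_xyz_loop(alt, az), to_xyz_vectorised(alt, az))

The vectorised version does exactly the same arithmetic; it just hands the whole array to numpy in one call instead of paying the Python loop overhead for every point.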

Another task was to do with image cleaning. In the pi-lomar project I have some OpenCV filters, routines that I have written which perform various cleaning or enhancement tasks on images. They are mainly used by the target tracking to clean up an image of the sky, enhance the stars and eliminate any pollution or haze. Up until now I have been doing this by researching online and trial-and-error programming to get the results that I want.

I found that by defining the problem well with Copilot I could get it to write code to do the same cleaning efficiently. What was really useful is that Copilot can analyse your example images. I described the camera sensor and lens I was using, including that the IR cutoff filter was removed. Copilot then made some reasonable analysis of the images and the issues they could have. It then produced some efficient OpenCV/numpy routines to clean the images. This is still an iterative loop, but it opened up a whole new way to solve future image handling problems. Once more I feel confident to try this approach because I can test the input/output to see if I get the results I expect.
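
For anyone who hasn’t played with this kind of cleaning, the general shape of it is something like the sketch below: estimate the smooth background (haze, light pollution, vignetting) with a large median blur, subtract it, then stretch what remains so the stars stand out. This is a generic illustration with invented filenames and thresholds, not pilomar’s actual filter chain.

import cv2

def enhance_stars(path):
    # Generic star-enhancement sketch; parameters would need tuning per site.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    background = cv2.medianBlur(img, 51)          # large blur keeps only slow gradients
    stars = cv2.subtract(img, background)         # stars are small, so they survive
    stars = cv2.normalize(stars, None, 0, 255, cv2.NORM_MINMAX)
    _, stars = cv2.threshold(stars, 30, 255, cv2.THRESH_TOZERO)  # drop residual noise
    return stars

cv2.imwrite("cleaned.jpg", enhance_stars("light_0001.jpg"))      # example filenames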

I have witnessed some truly weird bits of code being generated, so it’s critical that you understand what you are asking for, take time to explain it clearly to the AI, and study the suggested solution carefully. Even if the result is wrong, you sometimes still learn new things!

Utilities added

METCHECK.COM weather forecast

When planning observations it’s useful to know if the conditions will be right.
Here on the UK coast of the North Sea we don’t get many perfect nights, so you don’t want to miss them when they come. There are lots of websites and phone apps for the weather, and I use a few of those to plan the week ahead. When the actual night of observing comes I switch to a live data feed from https://www.metcheck.com/BUSINESS/developers.asp

For my location I have found this weather data feed really useful and generally accurate. METCHECK.COM make a weather forecast available for any latitude/longitude on Earth. You can download this as a JSON file from their website. The file is large and difficult to read in raw JSON format, so I made a terminal interface to show the information in a table. I have now included this utility program in the GitHub repository for the project.

METCHECK.COM data viewer, demonstrating that it’s pretty cloudy here.

You can open a terminal window, go to the src directory and enter

python pilomarmetcheck.py

This command will display a table of colour-coded weather forecast information.
Each row is a different weather measurement. Each column is a time into the future. The wider your terminal window, the more columns can be shown.
The utility refreshes automatically every few minutes to keep the information up-to-date. The same source code src/pilomarmetcheck.py provides the class metcheck_handler() which you can use in other programs if needed.

from pilomarmetcheck import metcheck_handler

Check in src/pilomarmetcheck.py to see how it is used to extract and display the data. The metcheck_handler class is also included in the pending version of src/pilomar.py so that weather conditions can be monitored and also recorded in image metadata.
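
The colour-coded table itself is nothing exotic, just ANSI escape codes printed to the terminal. A stripped-down sketch of the idea is below; the field names, values and thresholds are invented for illustration, so look at src/pilomarmetcheck.py for the real metcheck handling.

# Toy version of the colour-coded forecast table (invented data and thresholds).
forecast = [  # one entry per forecast time
    {"time": "18:00", "cloud": 85, "temp": 4, "wind": 12},
    {"time": "21:00", "cloud": 40, "temp": 2, "wind": 9},
    {"time": "00:00", "cloud": 10, "temp": 0, "wind": 6},
]

GREEN, YELLOW, RED, RESET = "\033[92m", "\033[93m", "\033[91m", "\033[0m"

def cloud_colour(pct):
    return GREEN if pct < 20 else YELLOW if pct < 60 else RED

print("measure " + " ".join(f"{f['time']:>6}" for f in forecast))
print("cloud % " + " ".join(f"{cloud_colour(f['cloud'])}{f['cloud']:>6}{RESET}" for f in forecast))
print("temp C  " + " ".join(f"{f['temp']:>6}" for f in forecast))
print("wind mph" + " ".join(f"{f['wind']:>6}" for f in forecast))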

NOAA Aurora conditions

I hate to miss an aurora display. We’ve had some very spectacular displays during the current solar maximum, but our notoriously cloudy nights mean it’s difficult to actually catch them. I noticed that all the online aurora visibility maps show the NOAA.GOV ‘ovation’ data.
This is a map of the likely intensity in the next 30 minutes for each lat/lon on Earth. If I understand correctly this is the ‘probability’ of seeing the aurora directly overhead. But the aurora is usually around 100km high in the atmosphere, so even if it is not directly overhead you may still be able to see it over a nearby location.
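
The geometry behind that is simple in principle: for a point at some altitude above a location a known ground distance away, work out its elevation angle from the observer and whether it clears the horizon. A minimal version of that calculation (spherical Earth, no refraction) looks like this:

import math

EARTH_RADIUS_KM = 6371.0

def elevation_angle(ground_distance_km, altitude_km=100.0):
    # Elevation of a point at the given altitude, seen from ground_distance_km away.
    # Spherical Earth, no refraction. A negative result means below the horizon.
    theta = ground_distance_km / EARTH_RADIUS_KM             # angle at the Earth's centre
    r = EARTH_RADIUS_KM + altitude_km
    return math.degrees(math.atan2(r * math.cos(theta) - EARTH_RADIUS_KM,
                                   r * math.sin(theta)))

# An aurora 100km up only drops below the horizon at roughly 1100km ground distance:
for d in (200, 600, 1000, 1200):
    print(f"{d:>5} km away -> elevation {elevation_angle(d):6.2f} degrees")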

Can you see the aurora if its lat/lon is over the horizon? It depends how high it is.

So I wrote a ‘projection’ routine which estimates what the aurora may look like when viewed from a distance. It is a simple terminal interface which refreshes automatically every 15 minutes. It shows the aurora oval as you might see it yourself. You can see the size of the oval and whether it is above or below the horizon from your location.

Simple Aurora ovation viewer. Could the aurora be visible where you are?
In this case, no, I probably cannot see it 🙂

The online alerts are still the best way to START looking for the aurora, but once an alert is in progress I use this tool to monitor the movement and latitude of the aurora during the event. It relies entirely upon the probability calculated by NOAA, and there are many other issues that can affect your view of the aurora, but it was an interesting challenge to program given my poor mathematical skills!

To run it go to the /src directory and enter

python pilomarovation.py

The same source code src/pilomarovation.py provides the class pilomarovation() which you can use in other programs if needed.

from pilomarovation import pilomarovation

Check in src/pilomarovation.py to see how it is used to extract and display the data.

Checking images on a terminal

Here’s a crazy one. I’ve developed the project as a character-based application. But it’s really useful to be able to see images as they are being captured sometimes. Normally I use SFTP to transfer the .jpg files off the RPi and onto a PC for viewing. This is great, but sometimes I just need a very quick view of an image. I added a utility to display .jpg files through the terminal interface. It uses the XTERM 256-colour palette to display a downscaled copy of the image.
I’ve found this strangely useful! So I’ve included it in the project repository now too.

A .jpg image of a tree rendered on a character terminal.

To run it go to the /src directory and enter

python pilomarviewer.py '[filepath]'

where ‘[filepath]’ is a .jpg filename or even a wildcard filepath. The ‘quote marks’ are important if you use wildcards.

It displays the image in the 40x160 terminal window. You can pan/zoom around the image as you need. If the image gets updated on disc the display automatically refreshes. If you enter a wildcard in the filepath, such as ‘/data/light_*.jpg’, it displays the most recent matching image.
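
The trick behind it is just ANSI colour escape codes: downscale the image, then print two image rows per text row using the foreground and background colours of the ▀ half-block character. A rough sketch of the idea (nothing like the full pilomarviewer.py, which also handles pan, zoom and refresh):

from PIL import Image

def xterm256(r, g, b):
    # Nearest entry in xterm's 6x6x6 colour cube (ignores the greyscale ramp).
    return 16 + 36 * round(r / 255 * 5) + 6 * round(g / 255 * 5) + round(b / 255 * 5)

def show_in_terminal(path, width=80):
    img = Image.open(path).convert("RGB")
    height = max(2, int(img.height * width / img.width / 2) * 2)   # even row count
    px = img.resize((width, height)).load()
    for y in range(0, height, 2):                   # two image rows per text row
        line = ""
        for x in range(width):
            top = xterm256(*px[x, y])               # foreground of the ▀ character
            bottom = xterm256(*px[x, y + 1])        # background behind it
            line += f"\033[38;5;{top}m\033[48;5;{bottom}m▀"
        print(line + "\033[0m")

show_in_terminal("light_0001.jpg")                  # filename is just an example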

You will be able to launch this from the next version of the pilomar.py program too, and link it to the current observation automatically.

Observations

2025 was a terrible year for observations here; the summer was fantastic, but up here it is too light at night, and the winters are too cloudy. We have had maybe 3 good viewing nights so far this winter, and 2 of them had a bright moon to complicate things. But I managed one night recently where I could test everything nicely for a few hours, and the whole package still works. In fact, when it works really smoothly you have to find a good book to read.

We’ve missed a few spectacular aurora displays due to weather, but sometimes caught a glimpse when the cloud has broken up. I still want to play with the ‘keogram‘ function in the pilomar software on a good aurora display.

Next steps

When the new release is ‘stable’ I will return to the plate solving problem. I’m determined to get a solution up and running for this, and I think that the AI tools will probably speed up the development for me now.

I’m in the process of updating the GitHub repository for the project with the last 10 months of developments and changes, and hope to have all that published soon, including the latest TMC2209 PCB design.

Spring is approaching so already my mind is turning back to more summer development tasks. I hope that some of the other builds around the world are in better climates and getting some good images back!

July 2024 update

Instructables builds

There are now 9 builds listed on the Instructables website, and I’ve also had contact from other builders, so I guess there are around 20 telescope builds out there. So there’s a small – but amazing – community forming for the telescope now. To create a sort of support group for the project I’ve created a Discord group for people who are building or running a copy of the telescope.

PM me via the Instructables project page for a link if you would like to join.

GitHub for pi-lomar updated

A large number of changes to the package are in the latest release just published on GitHub. There is a list of the changes, and some hints about upgrade options here.

The most significant changes in the new release are:

Now runs on Raspberry Pi 5.

The GPIO handling is different on the RPi5, so I had to redevelop and retest the GPIO code to work there. This reinforces the advantages of switching to the Bookworm 64-bit O/S. The RPi4 and RPi5 both run pilomar happily on Bookworm now. Support for the RPi3B with the old ‘Buster’ build remains, but it cannot support some of the new features in this latest release. If you want to upgrade to the latest version I now strongly recommend an RPi4B or RPi5 as the main computer.

FITS image handling.

With support from a few people in this project, and also from Arnaud and the Astrowl box project, there’s now a way to save .fits format image files. FITS format files are required by some astro image processing software. The raspistill and libcamera-still utilities will save raw images in .DNG format, but that is not accepted by some software. This new FITS support only works on Bookworm builds because it requires the picamera2 package to be available. You may be able to get this installed on earlier O/S versions, but I think it will need some tinkering. The FITS handling has been done by creating a new standalone Python routine (src/pilomarfits.py) which can be called just like the ‘libcamera-still’ command, but which generates .JPG and .FITS files instead. This is likely to improve further in the future; it’s just an initial attempt to add FITS support.
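
For anyone wondering what that looks like under the hood, the rough shape of the approach is to grab the image array with picamera2 and write it out with astropy. This is a simplified sketch rather than the actual pilomarfits.py, which also mirrors the libcamera-still command line options.

from picamera2 import Picamera2
from astropy.io import fits

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

array = picam2.capture_array("main")       # numpy array from the camera pipeline
picam2.capture_file("capture.jpg")         # a normal JPG alongside it

hdu = fits.PrimaryHDU(array)
hdu.header["INSTRUME"] = "pi-lomar"        # example header card
hdu.writeto("capture.fits", overwrite=True)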

Smoother motor movement.

A whole bunch of ideas came from other builders, and it became clear that microstepping is easy and safe to activate for most people. Microstepping makes movement slower, but smoother and quieter. It required some rethinking of the code on the microcontroller because microstepping generates a lot more motor pulses; a limitation with the clock in CircuitPython became apparent but is now resolved. There is also a ‘slew’ mode available in the latest package. This lets the telescope perform LARGE moves using full steps on the motor – noisy but fast. Then when it starts capturing the observation images it switches to microstepping.
Better microstepping support also means that you can build using the 200-step stepper motors now. These are generally easier and cheaper to buy.
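
A quick back-of-the-envelope example shows why the pulse counts (and therefore the timing precision) become an issue. The gear ratio below is illustrative rather than the exact pilomar gearing:

# Why microstepping multiplies the work (illustrative numbers only).
full_steps_per_rev = 200      # a 200-step motor
gear_ratio = 60               # example gear reduction, not the exact pilomar value
microsteps = 16               # 1/16 microstepping

pulses_per_axis_rev = full_steps_per_rev * gear_ratio * microsteps
print(pulses_per_axis_rev)              # 192000 pulses for one full axis revolution

# A 10 degree move: 'slew' mode at full steps vs microstepping.
print(full_steps_per_rev * gear_ratio * 10 // 360)   # ~333 pulses at full step
print(pulses_per_axis_rev * 10 // 360)               # ~5333 pulses at 1/16 microstepping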

Easier configuration changes.

Pi-lomar’s configuration is generally held in the parameter file, and you can make a lot of changes to the behaviour there. This latest release has moved even more of the configuration into this one file. However, some changes can be quite complex to configure correctly, so the latest software has added a few options to perform some common configuration changes directly from the menus. This ensures a more consistent and reliable setup for a few camera options and also for several of the microstepping-related settings.

Aurora features.

After the spectacular aurora displays earlier in the spring, I’ve added some experimental aurora recording features to the software too. Obviously we now have to wait for some good aurora displays to fully test this feature, but the basic concept seems to work OK. In aurora mode the camera points to a likely direction for the aurora and captures images as quickly as possible. It can also generate a simple keogram of the aurora display, which may be interesting to study sometimes.
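
A keogram is simple to construct: take the same single column of pixels from each frame and lay the columns side by side in time order, so one axis becomes time and the other elevation. A minimal sketch of that idea (not the project’s actual implementation, and the filename pattern is just an example):

import glob
import cv2
import numpy as np

frames = sorted(glob.glob("aurora_*.jpg"))      # captured frames in time order
columns = []
for name in frames:
    img = cv2.imread(name)
    columns.append(img[:, img.shape[1] // 2])   # the centre column of this frame

keogram = np.stack(columns, axis=1)             # height x frames x 3 image
cv2.imwrite("keogram.jpg", keogram)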

Observations

Well, summer is here now, and the nights are too light, too short and sadly still too cloudy. So no practical observations of anything new to show this time. All the project work has gone into this latest software development round instead.
So I’m now looking forward to slightly longer and darker nights coming in August and September, and hoping that the clouds go away.

What’s next?

I’m currently exploring some modifications to the telescope design.

Now that the RPi5 is supported – it has TWO camera ports! So I would like to explore the idea of having two cameras mounted in the telescope. Ideally a 16mm lens dedicated to tracking, and then a 50mm higher quality lens dedicated to observation pictures. There is also some feedback from other builders which is re-opening the design of the camera tower and camera cradle. I’m currently thinking of making a slightly wider camera tower to accommodate 2 cameras, and probably reorienting the sensors into portrait mode to improve access for focusing. It may make sense to improve the weatherproofing around the camera boards – as others have already done.

After a chat in the Discord group I’m also looking at adding a permanent ‘lens cap’ to the camera tower. This would sit below the horizontal position of the camera, so that the lens can be parked up against it when not in use. There are a couple of advantages to this idea. (1) You don’t have to remember to remove or reinstall the lens cap. (2) If the cap is sufficiently dark the camera can take the ‘dark’ control images automatically at the end of each observation.

I have a redesign of the motor controller PCB nearly ready, with improved power performance for the microcontroller. There will probably be another couple of improvements made to it, and then I’ll try getting some samples printed up. I considered switching from the Tiny2040 microcontroller to something larger with more GPIO pins, but have decided to stick with the current setup. There seems to be a practical memory limit on the RP2040 chip in the microcontroller: it has around 200K of working memory available to it, and the current functionality consumes it all. I cannot even get the current code to run on CircuitPython 9.x yet, so it’s still limited to 7.2 and 8.2. It may be worth waiting to see if any 2nd generation microcontroller comes from RPi in the near future before finalising the design.

February 2024 update

Instructables builds

There are 5 ‘I made this’ posts on Instructables now for pi-lomar, and a few other builders have been in contact with questions and suggestions over the last two months. I’m really looking forward to seeing what people manage to do with the telescope, and to getting some feedback and improvement ideas.

GitHub for pi-lomar updated

The January issues branch in GitHub became quite a monster; there is a list of all the changes available here. I’ll cover a few of the interesting items below.

UART communication problems

A few people have had problems getting the communication to work between the RPi and the Tiny2040. This has been due to a few different issues, but it became clear that more help was needed to identify where the problems are when communication doesn’t work. If something is wrong in the communication chain you might get an error message, or you might simply get a ‘dead telescope’ which refuses to move. So I’ve added features to detect and complain more clearly if something is wrong, and also a way to monitor and test the communication in real time.

Software versions

The original build required very specific versions of CircuitPython and the Raspberry Pi O/S. I’ve addressed a few of the limitations now so you can use the most recent copies of both. I’ve now got a telescope running happily with Bookworm 64bit on an RPi4 and CircuitPython 8.2 on the microcontroller. This means you can use whatever the current version is – you don’t have to go looking for archived copies anymore. The released version does not work on the RPi5 yet – I’m going to rework the camera handling for that beast first.

Motor configurations

The original design was heavily dependent upon specific stepper motor designs. This was quite restrictive for some people because those motors are not always easy or cheap to source. The new software has moved the motor and gearing configuration into the parameter file instead of being hardcoded. So now it is simpler to set up alternative motors AND you can still take updates to the software without having to repeat your own hardcoding changes.

Removing the infrared filter

In the last blog I mentioned that I had removed the infrared cutoff filter from one of the camera sensors. I had to wait a while for a clear enough night, but eventually grabbed a few shots of the region around the Orion Nebula. It was not a great observing night, there was considerable haze and some random cloud, but I got a few images.

I am happy to confirm that it really made a difference though. New objects appeared and previously faint objects are clearly enhanced by expanding the vision of the sensor.

After stacking and some enhancement in The GIMP, this is the region with the new infrared capability. I was not able to remove ALL the haze, but if you are patient you can reduce it considerably.

For clarity I’ve marked the major items that are now visible below.

There is clearly a colour tint still to these images which I need to play with some more, but there are definitely new details here.

Orion Nebula

There seems to be a larger area of nebula visible in this image. The colour variation is not as good as earlier images but I think that’s something I can still work on.

Flame Nebula

This was faintly visible before the infrared cutoff filter was removed, but it seems to be more clear now. Hopefully when I can gather more images to stack I can pull more clarity from that still.

Horsehead Nebula

With the infrared filter in place you could see a very faint hint of the Horsehead Nebula surroundings, but they were very subtle. You had to know there was something there and then play with image enhancement to get even a slight hint of it. But it is now more clear.

Barnard’s Loop

Orion is surrounded by a large ‘infrared only’ area of gas. I’ve never seen this before in any observations I’ve made, but suddenly it’s there. Barnard’s Loop is to the left of the belt, and although faint, there’s no doubt it’s now detected. The gas cloud extends lower down around Orion too, but in this shot it’s hard to separate urban haze from actual gas cloud.

Urban Haze

This brings me to my current problem: urban haze. There is light pollution and a generally poor quality atmosphere around here. I’m not living in a big city, but the conditions are visibly deteriorating as time goes by.

The IR image above was taken with the 16mm lens; I have now removed the IR cutoff filter from the 2nd telescope with the 50mm lens too. That one also has a light pollution filter added. The brief chance I’ve had to test it suggests that it does indeed make a difference to the haze that’s creeping into all the shots. The question is: does it also reduce the infrared wavelengths too? The next clear moonless night may answer that.

There’s another place where haze becomes an issue. That’s in the drift tracking mechanism of the telescope. Pi-lomar checks its position by taking a live image and comparing it with a calculated image of the sky. It uses the difference between star locations to correct the position of the camera. It’s not perfect, but it works well enough for the images to be stackable. But if there is strong haze in the sky the Astroalign package can struggle to recognise and match stars between the images. You can get a cloud of false stars at the bottom of an image which confuses things.
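
For anyone curious what that comparison looks like in code, Astroalign can give you the transform between the two star fields directly. A minimal sketch (not pilomar’s actual tracking code, and the filenames are just examples):

import cv2
import astroalign as aa

# 'live.jpg' is the cleaned tracking frame; 'expected.jpg' is the image generated
# from the star catalogue for where the telescope thinks it is pointing.
live = cv2.imread("live.jpg", cv2.IMREAD_GRAYSCALE).astype(float)
expected = cv2.imread("expected.jpg", cv2.IMREAD_GRAYSCALE).astype(float)

transform, (src_pts, dst_pts) = aa.find_transform(live, expected)
dx, dy = transform.translation            # pixel offset between the two star fields
print(f"Drift: {dx:.1f}, {dy:.1f} px, rotation {transform.rotation:.4f} rad")

If haze produces too many false detections, find_transform() simply fails to match the star patterns, which is where the image cleaning described below comes in.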

To work around that haze problem I use OpenCV to try to enhance the stars in the live tracking image: basically trying to reduce noise and enhance just the real stars. This requires tuning some OpenCV filter functions to work nicely with MY particular observing conditions. That’s a problem for people in other locations; they may need to tune the filter functions differently.

So I’ve modified the software to make these OpenCV filter functions into ‘scripts’. You no longer have to play with hardcoded function calls in the software; you can simply edit the scripts and test them rapidly against your conditions. I hope this is a good benefit for people. I will probably refine the configuration and testing further in future versions. This is clearly an area where a graphical interface would help. An early test of this new feature looks promising when trying to filter out tree branches from someone’s live tracking images. It looks like we can still pull stars out of quite busy and noisy shots.

Next steps

I am not intending to develop the software further now until the summer. The latest update needs to be taken into use and tested in more environments, so I want to limit any new changes to bug fixes or tuning related to that. Spring is approaching, it’s better to spend time observing!

December 2023 update

Finally published

So the telescope project is finally out! 50% of the project time seems to have gone into making the instructions – mainly because life is busier now than during the pandemic, and partly because of all the lessons learned while making them. Like so much of the project, the instructions included a lot of firsts for me too. A few mistakes have turned up in the build guide, but I’ve always received very kind and positive feedback and corrected any mistakes as quickly as possible. I still need to complete a full ‘bill of materials’ list though!

New issues


Feedback from builders has revealed a few issues, and I’m expecting more items to appear in the coming weeks as people get the telescopes up and running. What worked for me may not work for others; we’ll find out what was luck and what was bullet-proof soon! There are so many different ways to build every element of the project that there will ultimately be variations in every model.

PCB design

The PCB has been an unexpectedly interesting part. The build videos included a PCB that I made over a year ago as part of an exercise to learn how to use EasyEDA to design circuits. It included experiments and some development features – and also a mistake in one of the tracks. But with some careful track sculpting with a Dremel I got it running.

The circuit for the project is simple, and with hindsight could be even simpler. I understand that it’s more comforting to have a proven circuit board rather than building your own solution. So an immediate side project fired up: a few people were kind enough to offer help to create a proper design for the PCB which could be published. As I type this, I have 2 different prototypes on my desk for testing. If both pass the tests then I’ll add the Gerber files into the GitHub repository so that people can get their own boards made up too.

I still wonder if there is a suitable commercially produced HAT that would perform the same function. I’ve not found anything yet which has the onboard logic AND powerful enough stepper motor drivers. If one ever appears it would be sensible to rework the project to make use of that. There are similar ideas out there for robotics I suppose, but I’ve not yet found an appropriate specification.

I’m running the Tiny2040 microcontroller on a very low voltage, out of an abundance of caution really. When I measured it recently it was showing about 2.5V across the Tiny2040. Apparently 3.0V is the recommended minimum, but two telescopes have been running nicely on 2.5V for a long time so far. However, I’ll revise the component specifications with the new PCB design to increase the voltage a bit; that might increase the tolerances for different designs.

3D printing

My humble 3D printer, limited printing skills and multiple design iterations meant that my builds took MONTHS to produce. You can imagine my amazement when people started posting photographs of the telescope structure nearly complete after just a few days! They are all really nice quality prints too. It quickly appeared that at least one of the published STL files was from an older iteration – but it is corrected now. I’m resisting the temptation to evolve the designs further at the moment until a few more people have got the current design up and running. Then I can have more useful insight into areas for improvement.

Simplified kit

One question that appeared quickly was ‘How do I make one if I haven’t got a 3D printer?’. It sounds like sending the STL files to a commercial printing company is too expensive, and probably there are too many parts to make them all at a local maker workshop. At first I thought that would rule out making the telescope completely, but after a couple of conversations I started to think differently about it. The only thing that you really need to get 3D printed is the mechanism – basically the gears and cogs; everything else can be made from any material you like. So I’m wondering if there’s a side project possible here: a cut down version of JUST the mechanism, maybe 10 parts, slightly redesigned so that they can attach to anything. A drive-only kit would be more manageable to get printed. I’ve not designed anything yet, but if I get another cloudy winter with little observation done it might make a good evening project.

New lens

I know that the project began with the question “Can the RPi camera components make a working telescope?”, but of course I’m now chasing better quality. I suspect a lifelong challenge. The telescope works mechanically well with the RPi 16mm lens, but that has about a 20 degree field-of-view so things are small. I have been using the 50mm Arducam lens for about a year now and that magnifies much better, about a 5 degree FOV, but the question in my mind now is … are the optics high enough quality?


I’ve noticed that even quite poor images are rescued very well by the stacking software, but I couldn’t help trying a higher quality 50mm lens. So I’ve fitted a Nikon-to-C adaptor and mounted a regular 50mm Nikkor SLR lens to the telescope. I grabbed about 20 frames during a brief unexpected gap in the clouds the other night.


Some immediate thoughts…
  • Focusing is MUCH easier with an SLR lens. It was quite fiddly with the little C/CS lenses, but the SLR lens was producing crisp star points in minutes.
  • The camera cradle is JUST big enough to squeeze the 50mm lens in, but the weight of the lens is an issue. I modified the camera cradle to make use of the tripod mount hole at the bottom of the HiQ sensor. That should take the weight of the lens better and reduce the pressure on the rest of the PCB. It’s generally a better design even for the smaller, lighter lenses.


It’s raised an unexpected problem though: the new images have a definite CYAN tint to them. Is that a feature of the SLR lens coating? Is it because the image is just more crisp? Is it only in the JPG images or is it in the .DNG raw files too? Experiments and/or other people’s advice is needed here.
I have STILL not been brave enough to take the IR filter off the sensor.

Software

The first few people to start builds have identified some fixes which are in the next software release; I really appreciate their patience while we get all this working easily for everyone. My main focus at present is to improve the problem-solving capabilities of the software.
Some examples:

I am adding a feature to help with tuning the telescope’s tracking function. You have to balance two or three parameters to get it running smoothly, so a tool to help find good values seems sensible.

Debugging communication has been an exercise for everyone, so I’m cleaning up some error messages to make things a little more clear there.

There are a couple of extra ‘version’ messages in the log files now so we can see what versions of components are installed.

Raspberry Pi 5

I could not resist so I ordered one online… and had to wait… meanwhile I was in the Raspberry Pi shop in Cambridge the other day and they had a bunch on the shelves… I very nearly bought another.

The current project is restricted to the ‘Buster’ operating system and the RPi4B. (The RPi3B seems to work too, but is slower.) Both are aging, and I fear something critical may one day become unavailable. I already had the problem that the Buster image vanished from the official installer tool at about the same time I published the instructions. Luckily there is an archive of all the old versions available. So I need to plan for an updated build eventually.

The RPi5 is sufficiently different architecture that the setup and some functions will need rethinking. But if I can get the telescope working on the RPi5 there are useful new capabilities.

  • libcamera replaces the old raspistill camera handler. I’m hoping that makes handling of the RAW image data simpler. Let’s see.
  • The RPi5 of course is more powerful, which improves performance. Maybe onboard image stacking becomes viable if I can find a LINUX live stacker package somewhere.
  • The RPi5 supports 2 cameras simultaneously. This is very useful. Today the telescope uses a single camera for IMAGES and also for DRIFT TRACKING. In practice a single lens cannot be optimised for both functions, but 2 separate lenses solves that. I think a 16mm lens for the drift-tracking and a 50mm SLR lens for capturing the observations would be great. That may allow different tracking strategies too.

My heart sinks at the thought of fighting through all the package dependencies again, perhaps I should still wait a while for everything to stabilise.

Observing!

Of course the purpose of a telescope is to actually make observations! So I’m really hoping that we have a better winter than last year. So far the forecast has been quite poor, but we’re occasionally getting unexpected clear periods in otherwise total cloud cover. It means you have to keep an eye out for the breaks in the cloud, because the forecasts aren’t picking them up. A fully weatherproof telescope would be a real bonus here; then you could grab the brief random opportunities that present themselves. The light pollution has been quite bad around our village too; we’re close to the coast, which seems to leave mist in the air, and recent building development is adding to the lighting problems. The need to make this thing fully portable is growing!