
Sheffgeeks Blogs

Monday, 15. July 2024

caolan

Mutiny Test Suite and Announce API

Wednesday, 10. July 2024

caolan

State Management Using Signallers

Monday, 24. June 2024

caolan

Mutiny Chat Example

Wednesday, 19. June 2024

caolan

Promises and Memory Leaks

Monday, 03. June 2024

caolan

Daemon Structure and New APIs

Tuesday, 28. May 2024

caolan

Event Delegation

Monday, 27. May 2024

caolan

Message Protocol and Local Peer Discovery

Thursday, 23. May 2024

caolan

New Project: True Serverless Web Apps

Monday, 15. April 2024

tomwardill

Building a Balloon Tracking Cyberdeck

As with a lot of my projects, this one started with a message: David: “Sooo I think I want to launch a high altitude balloon…” Me: “Sounds like fun! Why? Got plans?” David: “I have this dream of sending a camera up really really high. From first attempts at research, looks to be plausible” There was some scope creep. The story of the balloon aspect of the project will be saved for another day, but this is about the tracking.

Sunday, 14. April 2024

caolan

Revisiting the Bramley

Wednesday, 03. January 2024

caolan

Destructuring Infinity

Wednesday, 02. August 2023

tomwardill

Adventure Racing Tips for Beginners

Recently, I had the pleasure of taking part in ITERA-lite 2023 as a pair with my good friend David. We crossed the start line (our main ambition) and made it to Transition 3 after 26 hours and 150km before my fitness gave up and we withdrew. However, in the process of preparing for the race, we were lucky enough to be supported by a bunch of people who have a lot of experience with this event and adventure racing in general.

caolan

Smaller JavaScript Using Module Exports

Friday, 28. July 2023

caolan

Smaller JavaScript Using Encapsulation

Monday, 24. July 2023

kitation

Disabled or not?

Unlike my neurodivergence and mental illnesses, I’ve always been reluctant to classify my physical health conditions as disabilities. That has changed a little in recent years as these conditions require me to take drugs that suppress my immune system, but then I would say that being immunocompromised is my disability. I want to talk in this post about being on that edge of disabled and not, and the experiences of conditions that can be fine most of the time until they’re not.

Thursday, 24. November 2022

caolan

The Merits of Visible Alt-Text

Friday, 19. August 2022

caolan

A Very Simple Markup Language

Friday, 13. May 2022

caolan

Serving Markdown Direct from Apache

Sunday, 24. January 2021

benofbrown

Pure Data Objects Written in C

I’ve been using Pure Data for a month or two now to spice up the visuals when I stream live, and I’ve found it pretty fun but occasionally frustrating. Most of my frustration has been around finding objects (or combinations of objects) that do what I want to do. For whatever reason I’ve struggled to find specific things online, so there’s a good chance that what I’ve done in the rest of this blog has been for naught, other than a learning experience for me.

The main thing I wanted was something I could use to switch between scenes I have set up. Each one has one or more gemhead objects that can be turned on or off. I had rigged something up using an hradio object and a select. Each outlet of the select went to that scene's 1 message, and to the 0 messages of every other scene. Both of these messages then go into a toggle object which is linked to all the gemhead objects for that scene. This gets very cumbersome: for example, I have seven scenes I want to use, each of which needs a 0 and a 1 message, and each 0 message has to be connected to every select output except the one that corresponds to that scene. Here is a picture of the patch:

Yesterday I had a look at this excellent git repo, HOWTO write an External for Pure Data, and saw that it would be pretty simple to write an object in C, which I then did. And then I wrote a few more.

The object I created to help me with this problem is one I’ve called select8. It takes the place of the select object in this patch and has 8 outlets. Maybe one day I’ll change it to use a creation argument for the number of outlets, but 8 will do for now. What it does is pretty simple: it takes in a float (such as that provided by the hradio object) and sends a 1 out of the corresponding outlet. The important change from the built-in select object is that it also sends a 0 out of all the other outlets. This means that not only do I not need to have a 0 and a 1 message for every scene, I don’t need to connect the other outlets to each 0. This has cleaned it up a lot, as you can see from the screenshot below:

Now that I’d got bitten by the bug, I wrote some more objects. The next one was switch8, which I wanted to use with pix_video. You can only have one pix_video object for a given device, and I wanted to be able to route it through different render chains, so this is what I use. It takes anything at all into its first inlet and sends it out of one of the eight outlets. Which outlet is in use is determined by a float you send to the second inlet. Like select8 the number of outlets is fixed; maybe a future version will be more flexible, but that’s more than enough for me right now.
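
Under the hood these objects are all tiny. Here’s a rough sketch of what a select8-style external looks like in C, following the pattern from that HOWTO - a simplified reconstruction rather than the exact code in my repo:

#include "m_pd.h"

static t_class *select8_class;

typedef struct _select8 {
    t_object x_obj;
    t_outlet *x_outs[8];
} t_select8;

/* Called when a float arrives at the inlet: send a 1 to the
 * matching outlet and a 0 to all of the others. */
static void select8_float(t_select8 *x, t_floatarg f)
{
    int sel = (int)f;
    /* Fire right-to-left, as Pd's built-in objects do. */
    for (int i = 7; i >= 0; i--)
        outlet_float(x->x_outs[i], i == sel ? 1 : 0);
}

static void *select8_new(void)
{
    t_select8 *x = (t_select8 *)pd_new(select8_class);
    for (int i = 0; i < 8; i++)
        x->x_outs[i] = outlet_new(&x->x_obj, &s_float);
    return (void *)x;
}

/* Pd calls this once when the external is loaded. */
void select8_setup(void)
{
    select8_class = class_new(gensym("select8"),
                              (t_newmethod)select8_new, 0,
                              sizeof(t_select8), CLASS_DEFAULT, 0);
    class_addfloat(select8_class, (t_method)select8_float);
}

switch8 follows the same shape, just with a second inlet that chooses which of the eight outlets passes its input through.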

I’ve also written onchange and on1. These both take in floats and emit bangs. onchange emits a bang when the input value changes, on1 when the input value is exactly 1. I wrote these because of a side-effect I noticed after I modified select8 to advance to the next outlet when it receives a bang. I discovered that, despite using the explicit outlet_float function, the output would also be picked up as a bang, so I needed a way to filter these. I initially wrote onchange, which almost did what I wanted, but it was really on1 that was the final piece of the puzzle.
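
Something like onchange is about as small as an external gets. Again, this is just a sketch of the idea rather than the exact code:

#include "m_pd.h"

static t_class *onchange_class;

typedef struct _onchange {
    t_object x_obj;
    t_float x_last;    /* last value received */
    t_outlet *x_out;   /* bang outlet */
} t_onchange;

/* Only emit a bang when the incoming float differs from the last one. */
static void onchange_float(t_onchange *x, t_floatarg f)
{
    if (f != x->x_last) {
        x->x_last = f;
        outlet_bang(x->x_out);
    }
}

static void *onchange_new(void)
{
    t_onchange *x = (t_onchange *)pd_new(onchange_class);
    x->x_last = 0;
    x->x_out = outlet_new(&x->x_obj, &s_bang);
    return (void *)x;
}

void onchange_setup(void)
{
    onchange_class = class_new(gensym("onchange"),
                               (t_newmethod)onchange_new, 0,
                               sizeof(t_onchange), CLASS_DEFAULT, 0);
    class_addfloat(onchange_class, (t_method)onchange_float);
}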

That’s all I’ve written for now; I’m sure I’ll write more in future. I’ve made them all freely available under the Clear BSD License over on GitLab: gitlab.com/benofbrown/pd-objects/

Monday, 28. December 2020

benofbrown

Streaming Pure Data GEM Video Via OBS

Recently I’ve been doing a bit of streaming with my modular synth and wanted something to make the video side a bit more interesting for anyone watching it. After a bit of searching I discovered Pure Data (aka Pd), and the GEM plugin for it, which adds graphical elements you can create and manipulate in Pd, so I thought I’d give it a go.

Getting audio in was pretty simple: I already use JACK for my audio routing, so all I had to do was configure Pd to use JACK as well, and connect everything up in qjackctl. Then in my patch I created an adc~ object, into which I could route the left and right channels from my mixer.

After getting the gem package installed and tweaking the settings of Pd to actually load it, I managed to get a quick demo up and running fairly quickly and gazed in wonder at the cube spinning in front of me. And then I hit my first problem. I stream at 1280x720 resolution, so I wanted to set the GEM window to that size. Looking it up, I saw that I could send it a dimen 1280 720 message before creating it and that would change the size of the window. That worked really well, until I tried to make my cube bigger: straight away I noticed that no matter what I did, it would get clipped out where the old window was. So although the window was bigger, I couldn’t put anything in the new space.

This took quite a long time to figure out, and in fact I never did figure it out. I was also having trouble getting the vanilla Pd distributed with Debian to run with the plugins you need to be equivalent to Pd-extended, so I ended up installing something else entirely: Purr Data. Not only did it give me access to the objects I needed, it also fixed the annoying window problem, so I’d say Step 1: Install Purr Data.

Now that I had a few cubes spinning around in the GEM window, I wanted to get those to show in OBS Studio, which is a really excellent program you can use to combine various audio and video sources into a stream you can push to YouTube, Twitch and so on. My first attempt was to use the XWindow capture source in OBS. This worked, but not very well. I don’t know if it’s my window manager or just X11, but it was very flickery, which is no good at all. The next part of this blog is the whole reason I’m writing it, because it took me a stupidly long time to figure out and get working.

So far I’ve been enjoying Pd, but I’ve found resources online a bit lacking. It turns out the help in Pd (or at least Purr Data) itself is pretty good, but looking for things online didn’t really give me much joy. I did, however, find the v4l2loopback Linux kernel module. If you’re using Debian like me, you can get this easily by installing the v4l2loopback-dkms package. I then loaded the module and set it up to create a device at /dev/video10. After some failed attempts I found this combination of options to work best:

sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam"

Now, the label bit was copied and pasted from the v4l2loopback wiki, though the example there uses the device as a destination for OBS rather than a source. So although the label isn’t strictly accurate for what we’re doing, my OBS scenes are now configured to use it, and I’m not changing it now.

Adding it as a source in OBS is really simple: you just add a Video Capture Device (V4L2) source and choose OBS Cam in the drop-down it comes up with. Now all we need to do is get GEM to send the video output to it.

Again I didn’t find that much info about doing this online, though a thread on the Pure Data Mailing List pointed me in the right direction.

The first thing I learnt is that the way to send the video to a v4l2loopback device is to use the pix_record object, with the file to record to set to the device our modprobe command created earlier, /dev/video10. This object, however, expects a picture, not an object like our cube. So to get video of our cube sent via pix_record, we need to render each frame as a picture and map that as a texture onto another object, which is then sent to pix_record.

To do this we use pix_snap to take a snapshot of what we’ve rendered, then pix_texture to use it as a texture, before finishing with a rectangle object. In order to preserve the aspect ratio of the original frame this needs to be rectangle 16 9. I translate this object so it’s outside of the view in the Gem window using translate 1 0 0 3, though you can leave it where it is if you want; it just looks a little odd. Finally this is sent to the pix_record object. We then send messages to that object to start and stop recording.

If anyone wants to use this method, I’ve attached an example patch gem-record-example.pd that should get you up and running. Here’s the gist of it:

  • Install Purr Data
  • Install v4l2loopback - the v4l2loopback-dkms package on Debian derived distros
  • Create a v4l2loopback device
  • Make your scene in Purr Data with gemwin, gemhead etc
  • Add another gemhead with a higher priority number
  • Add a trigger a b to that, and connect both outputs to pix_snap, set to capture the whole window (n.b. the docs say it defaults to doing this, but that didn’t seem to work for me)
  • Add a pix_texture to that, then translate 1 0 0 3, then a rectangle 16 9. This is where the capture will be applied as a texture; it doesn’t need to be visible in the window, which is what the translate is for.
  • Add the final object, pix_record, to the end of that chain. To start sending video to the device, send the message codec v4l2, file /dev/video10, auto 1, record 1. To stop sending it, send record 0.
  • Open OBS, and add a Video Capture Device (V4L2) source and choose OBS Cam.
  • Use OBS as normal; you should now be able to see the scene you created in Gem, without flickering.

Wednesday, 18. November 2020

caolan

Bramley: Splash Screen and Shutdown

Bramley: 6. Splash Screen and Shutdown

[video: splash screen and shutdown]

Communicating with the Bramley via serial cable is fine, but to use the device in my hands, I need to launch a program automatically when it's turned on and - using only the buttons - be able to shut it down cleanly again.

In this post, I hook into various parts of the startup/shutdown process to handle input and display messages to the user.

Startup

The first message you see in the video above is the 'Bramley' name bouncing impatiently. The animation is based on this GIF:

Decoding the GIF at runtime added around 2 seconds of latency. So in a moment of yak-shaving madness I created a Rust macro to turn it directly into code (I'll probably delete this later as it slows down compile times).

Then, to display the animation as early as I could, I created a systemd service file with as few dependencies as possible.

[Unit]
Description=Splash screen
DefaultDependencies=no
Conflicts=shutdown.target

[Service]
ExecStart=/usr/local/bin/splash

[Install]
WantedBy=default.target

Using systemd-analyze plot, I can check when the splash screen is displayed.

There's a hard floor of around 6 seconds before systemd starts launching services. The splash screen itself is launched at around 6.7 seconds. I could tweak the order a little, but it's unlikely to get significantly faster.

The problem with starting a program this early is that the kernel's SPI module hasn't been loaded yet, and at first I couldn't draw to the screen. My solution was to compile a custom kernel that included the SPI module statically.

Startup complete

Once the Pi is booted, I run another program to read input and shut down the Pi if I press all the buttons at once.

[Unit]
Description=Bramley menu

[Service]
ExecStartPre=/bin/systemctl stop splash.service
ExecStart=/usr/local/bin/menu
Restart=on-failure

[Install]
WantedBy=basic.target

Unfortunately, the full startup sequence is really slow. To reach the graphical target takes over 35 seconds as measured by systemd (wall clock time from power on is even longer).

pi@raspberrypi:~$ systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @35.464s
└─multi-user.target @35.452s
  └─ssh.service @34.837s +590ms
    └─network.target @34.754s
      └─dhcpcd.service @23.000s +11.730s
        └─basic.target @21.690s
          └─sockets.target @21.684s
            └─triggerhappy.socket @21.671s
              └─sysinit.target @21.557s
                └─systemd-timesyncd.service @18.386s +3.101s
                  └─systemd-tmpfiles-setup.service @17.280s +1.015s
                    └─local-fs.target @17.251s
                      └─boot.mount @17.086s +146ms
                        └─systemd-fsck@dev-disk-by\x2dpartuuid-5f8a5265\x2d01.se
                          └─dev-disk-by\x2dpartuuid-5f8a5265\x2d01.device @13.45

My menu program runs at basic.target, which is currently reached in just under 22 seconds. This is probably slower than it needs to be - Raspbian enthusiastically starts everything by default - and culling unnecessary services should speed it up considerably.

I'm now wondering what an acceptable time-to-menu would be. As a child, I remember the GameBoy being slow, but acceptable, and that takes around 6-7 seconds from power on to menu.

[video]

The Bramley currently takes around 12 seconds to display the splash screen and 34 seconds to display the menu. If I'm to approach that GameBoy benchmark of 6-7 seconds there's still a long way to go - I'm not even sure if it's possible with a Pi.

Of course, most modern devices cheat and don't power off at all. A standby mode may be worth investigating, but I'm going to leave that problem for another day. For now, I can shut down the Pi while prototyping and that's enough.

Shutdown

To display the 'Shutting down...' message, I created another systemd service using the shutdown.target:

[Unit]
Description=Shutting down message
DefaultDependencies=no
Conflicts=menu.service

[Service]
ExecStart=/usr/local/bin/shutdown-start

[Install]
WantedBy=shutdown.target

Shutdown complete

At the end of shutdown, once the filesystem has been re-mounted as read-only, and just before halting the system, I display "Bye" - which due to how the screen works, fades out poignantly after the power is cut.

There's no systemd service file for this. Instead, the program is simply symlinked into the /lib/systemd/system-shutdown/ directory and systemd will run it automatically.

I now have basic control over startup and shutdown. Next time, I'll make my first attempt at chorded input.

Sunday, 25. October 2020

caolan

Bramley: Buttons

Bramley: 5. Buttons

Using the rppal library and the Raspberry Pi's GPIO pins, I've added support for the six Cherry ML switches on the back of the Bramley.

Raspberry Pi to Buttons
-----------------------
GPIO 16 -> Button 1
GPIO 20 -> Button 2
GPIO 21 -> Button 3
GPIO 25 -> Button 4
GPIO 24 -> Button 5
GPIO 23 -> Button 6

Reading from the pins is easy enough, but to correctly detect a key press, I've got some more work to do.

Interference

Without a pull-up or pull-down resistor, the value read from a GPIO pin will be floating. As a digital input, it will hover between 0 and 1, depending on background interference, and, if I'm not careful, produce unwanted key presses.

Normally, I rely on the Raspberry Pi's internal pull-up resistors to deal with this because they're easy to enable in software, and that's my comfort zone.

let gpio = Gpio::new()?;
let pin = gpio.get(16)?.into_input_pullup();

But, this time, despite enabling the internal resistor I was still experiencing 'phantom' key presses. Determined to exorcise them using hardware, I wired another 10K pull-up resistor to each switch:

GPIO pin      Button        GND
   |____________/ ___________|
          |
          Z  <- 10K resistor
          |
       3V3 pin

That seemed to block out the noise. I'm now ready to try and make sense of the signal.

Debouncing

An actuated switch doesn't produce a signal that's immediately high or low. Before stabilising, it will bounce between each state.

If I watch only for transitions from high to low (1 to 0), I would incorrectly count three keypresses. This could be avoided in hardware by using a capacitor to smooth the transition, but I'm going to do it in software, where the solution is to take several readings and wait for the signal to stabilise.

Normally, I make use of interrupts and timeouts to wait for a stable signal, but the code is always more complicated than I'd like. So this time I've decided to treat the Pi more like a microcontroller and just poll the pins at a regular interval.

Every 3 milliseconds, I read a digital high/low from the six GPIO pins and emit new button events according to these rules:

  • Is the current value different to the last event emitted?
  • If so, were the previous 8 reads all high or all low (i.e. stable)?

If both of those are true, I can emit a new event. The code works something like this:

// Read a high/low value from the button's GPIO pin
let level = btn.read();

// We're only interested if the state has changed from
// what we last sent on the channel.
if level != state {
    // Did we read 8 stable values in a row prior to
    // this (i.e. all bits are 0 or 1)?
    if history == 0 || history == 255 {
        // Update the button's state
        state = level;
        // Send a new button Up/Down event
        tx.unbounded_send(state).unwrap();
    }
}

// Push the latest read onto the history integer by updating
// the bit at the end.
history = match level {
    Level::High => history.rotate_left(1) | 0b00000001,
    Level::Low => history.rotate_left(1) & 0b11111110,
};

// Wait 3 ms before polling again
thread::sleep(Duration::from_millis(3));

For the full context see the source code.

By waiting for 8 stable reads at 3ms intervals, I give the signal 24ms to stabilise. And by storing the high/low bits in an unsigned 8-bit integer, I can easily compare it to 0 or 255 to determine if the signal is stable.

Compared to some other debouncing techniques (like using a capacitor), my approach has the drawback that it won't protect against brief background interference during an otherwise stable signal - the hope being that this has been safely avoided by those 10K resistors.

This approach does, however, have a side-effect I like: provided the signal was stable beforehand, button presses are detected immediately. My aim is to reduce input latency wherever possible, because I know some screen updates might be slow.

What I'm unsure about is whether polling every 3ms is significantly more CPU hungry than using interrupts. Using interrupts I don't see any CPU usage because it happens inside the kernel, but for all I know it might be doing the same polling I'm doing. If anyone knows how Linux handles this I'd be curious to learn more. Anyway, I think the CPU use will be negligible either way.

Next time, I'll get a Rust program to automatically launch when the Pi is booted.

Next entry: Bramley: 6. Splash Screen and Shutdown