Categories
stories The Shouter Series

The Shouter

‘That’s like a mule kickin ya.’ ‘Jasus, it’s makin me feel sick.’

Dermot and Declan, one-time participants in the annual ‘Young Scientist’ competition held every year in Dublin, Ireland, unwittingly experimenting on themselves with a new speaker design…

‘You know regular speakers, they use elastic to bounce the cone back and forth,’ Dermot wondered one day while not paying attention in Maths class, or was it Physics? ‘What if we, ya know, used a metal cylinder as a driver, and used an electromagnet to drive it in and out of a box?’ Sketch sketch sketch.

‘It would mean we could make it move very slowly even if it was small.’ Declan, who knew about sound, was sceptical to say the least, but did admit that the requirement of a larger cone vibrating more slowly to produce bass notes would fall away if one could maintain exact control of the object’s position: ‘We could even focus the output’.

Experimentation proceeded in the shed out the back of Declan’s house. This also happened to be the home of “Radio Wow”, the dance-orientated pirate radio station blasting the airwaves (and many nearby unprotected electrical devices): ‘Wowing you out every Monday to Thursday after homework, yowsa’. Prototyping the new ‘speaker’ (at this point they wanted to call it a ‘Shouter’) with Massive Attack’s ‘Teardrop’ had made them feel very uncomfortable: ‘intestines weren’t built for shaking, Dermot’.

Success achieved: the Shouter could produce any low frequency, right up to ultrasonic. The loudness was a function of how long the central cylinder was and how much power they put into it.

With four of them stacked in a box, they could also make the sound pulse highly directional and, more importantly, focused.

Cue the event: Dermot and Declan set up their display at the Young Scientists. Sleepless nights followed, dreaming of just how impressed everyone would be, not just by the ‘Shouter’, as it was officially named, but also by the amazing choice of music with which they would demo it! Declan, who knew about sound, had worked on the playlist all week. Dermot had thought about just how impressed all the female young scientists might be, and did no work at all.

They were asked to turn it down after approximately 47 seconds.

The organisers were not impressed. Dermot and Declan were not impressed. The ‘growing edible mushrooms on poop’ young scientists from the stand directly across from them were not impressed; they were also quite pungent, having gotten quite the fright when ‘Whole Lotta Rosie’ by AC/DC accidentally focussed on them in the middle of a particularly delicate part of their display setup.

To be continued….. and perhaps edited….

The Shouter (2)

Categories
Meta quick

I’m a writer Mom

I’m sitting at a table of people I mostly don’t know, and it has gradually become clear that almost everyone here is “I work in IT”.

I try not to say “I work in IT”, fail, then try to describe what I actually do, and fail again. It doesn’t sit well with me. Hosted on AWS blah. If I said Carpenter, Architect etc etc, you might know what I do, or have an idea anyway. Shit, even “Software engineer” makes some sense.

So, my goal is to be a writer. Creating gives a good feeling, and is challenging in a different way. It’s not problem solving, it’s not fixing the world’s problems, or even understanding them; it’s creating something that, at best, is a mirror held up to the world. Maybe a funhouse mirror. I have written all my life, on and off, but have never been…. a Writer.

Listening to “A Slip of the Keyboard” by Terry Pratchett: he always started a new book as soon as the last was finished, because if he wasn’t writing, then calling himself a writer was cheating.

So I’m flipping this over and saying as long as I’m writing…. I’m a writer.

Categories
Python

Kubernetes, moving from an entrypoint.sh to supervisord

For reasons (specifically that my Kubernetes hosts have been up and down lately) I have had to harden my deploy.

The first and most important thing was to stop treating a graceful shutdown of my rabbitMQ connections as final. I realised it would be way better to just sleep and retry on _any_ rabbitMQ connection error, particularly if rabbitMQ shut down gracefully.

Previously I was assuming that if rabbitMQ was shutting down gracefully then the whole app was.

How wrong I was: I was informed in no uncertain terms that I should be prepared for individual containers to be shut down and restarted without the whole pod getting a restart.

To that end the following changes were needed:


def getJobs():
    l.info("Running get Jobs")
    while True:
        try:
            global connection
            global channelM
            connection = pika.BlockingConnection(pika.ConnectionParameters('rabbitmq'))
            channel = connection.channel()
            channelM = connection.channel()
            channel.queue_declare(queue=myRabbitQueue)
            channel.basic_consume(queue=myRabbitQueue, auto_ack=True, on_message_callback=callback)
            l.info("Will now continue to run")
            channel.start_consuming()
        # Don't recover if connection was closed by broker
        except pika.exceptions.ConnectionClosedByBroker:
            l.error('Rabbit error connection closed by broker')
            break
        # Don't recover on channel errors
        except pika.exceptions.AMQPChannelError:
            l.error('Rabbit error channel error')
            break
        # Recover on all other connection errors
        except pika.exceptions.AMQPConnectionError:
            l.error('Retrying rabbitMQ listener')
            continue

to

import pika
import logging as l
from time import sleep

# myRabbitQueue and callback are defined elsewhere in the app

def getJobs():
    l.info("Running get Jobs")
    while True:
        try:
            global connection
            global channelM
            connection = pika.BlockingConnection(pika.ConnectionParameters('rabbitmq'))
            channel = connection.channel()
            channelM = connection.channel()
            channel.queue_declare(queue=myRabbitQueue)
            channel.basic_consume(queue=myRabbitQueue, auto_ack=True, on_message_callback=callback)
            l.info("Will now continue to run")
            channel.start_consuming()
        # Recover on all other connection errors
        except pika.exceptions.AMQPConnectionError:
            l.error('Retrying rabbitMQ listener')
            continue
        except Exception as err:
            l.error(f'Error in connecting to rabbitmq, going to retry: {err}')
            sleep(5)

Note the removal of the “# Don't recover on channel errors” handler, where it breaks. The break broke the whole damn run loop!

And this brought me to my next problem. My app just wasn’t shutting down cleanly. After much messing I realised that the entrypoint.sh script (it had a few “wait-for-its” then started and backgrounded my various python modules) was not passing SIGTERM, so my app wasn’t shutting down properly at all.

Because of how it was built, this never mattered, but it always took the 60 seconds to destroy when I was pushing a new version… and it was a bit annoying to have it “not be workin proper” as they say.
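Whichever process manager ends up forwarding the signal, the app itself still has to react to SIGTERM. Here is a minimal sketch of that, with a flag standing in for the actual connection teardown (the flag and handler names are mine, not from my app):

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # real code would close rabbitMQ connections etc. before exiting
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# simulate the SIGTERM Kubernetes sends on shutdown
os.kill(os.getpid(), signal.SIGTERM)
print(shutting_down)  # → True
```

If the entrypoint script swallows the signal, a handler like this simply never fires, and you get the 60-second wait for SIGKILL instead.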

So I am moving to supervisord. But if there’s one thing I dislike, it’s config files. So let me present a “translator” from entrypoint.sh to a supervisord conf file. It’s not perfect, but it beats writing it by hand. I hope someone else finds it useful.

What is confusing to me, however, is that it always seems so hard to just get simple best practices for deploying a simple-ish Python app to Kubernetes. I thought the whole point was to make things easier for developers. If I was running my app in a VM I wouldn’t have to care about any of this stuff!

I think the reason for this is I wrote the python app from scratch, and didn’t use a framework that had all the default configs ready to go!

Here’s the helper, I hope it helps:

def generate_supervisord_conf(entrypoint_file):
    supervisord_conf = "[supervisord]\n"
    supervisord_conf += "nodaemon=true\n"
    supervisord_conf += "logfile=/dev/null\n"
    supervisord_conf += "logfile_maxbytes=0\n"
    supervisord_conf += "user=root\n\n"

    programs = []
    with open(entrypoint_file, "r") as f:
        for line in f:
            if line.startswith("pipenv run"):
                program_name = line.split(" ")[2].rstrip("\n").split("/")[-1].split(".")[0]
                programs.append(program_name)

    for program in programs:
        program_conf = f"[program:{program}]\n"
        program_conf += f"command=/usr/local/bin/pipenv run /app/{program}.py\n"
        program_conf += "autostart=true\n"
        program_conf += "redirect_stderr=true\n"
        program_conf += "stdout_logfile=/dev/fd/1\n"
        program_conf += "stdout_logfile_maxbytes=0\n"
        #program_conf += "depends_on=is_it_up\n" # This is the wait-for-it script
        program_conf += "autorestart=true\n\n"
        supervisord_conf += program_conf

    return supervisord_conf


entrypoint_file = "./entrypoint.sh"
supervisord_conf = generate_supervisord_conf(entrypoint_file)

with open("supervisord.conf", "w") as f:
    f.write(supervisord_conf)
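For reference, the name-extraction line in the translator assumes entrypoint lines shaped like pipenv run /app/&lt;module&gt;.py (the path below is a made-up example); here is what it pulls out:

```python
# hypothetical entrypoint.sh line, of the shape the translator expects
line = "pipenv run /app/worker.py\n"

# same chain of splits as in generate_supervisord_conf
program_name = line.split(" ")[2].rstrip("\n").split("/")[-1].split(".")[0]
print(program_name)  # → worker
```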
    

I find myself liking programmatic generation more and more!

UPDATE! I forgot about the annoying logging problems when using supervisord… please see https://docs.docker.com/config/containers/multi-service_container/. I have updated the code above.

Categories
personal quick

Improve your Zoom

Just a quick note: try playing some music in the background next time you are in a Zoom meeting (or Teams or whatever). You can usually adjust its volume down low compared to the Zoom audio. People on the call can’t hear it on your headset, and it can really make the meeting nicer.

I like a bit of chill electro to add a bit of background noise. Try a cinematic score to liven up a boring meeting a bit 🙂
eg. https://www.youtube.com/watch?v=HC9ULm4HquU

Categories
Python

Reducing boilerplate with RabbitMQ and Python

Following on from my last post about sending RabbitMQ messages with shared code in Python (caveat: this is not for >100 messages per second requirements).

Here is how to listen to a queue (and use it), with a reduced amount of boilerplate.

I also learned you can pass a function to an imported module to get that module to “call back” into the parent code!!
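Stripped of all the pika details, that callback pattern is just this (a toy sketch, not the real module):

```python
# "shared module" side: knows nothing about its caller, only that
# callback is a function taking one job
def get_jobs(jobs, callback):
    for job in jobs:
        callback(job)

# "parent" side: passes its own function in
received = []
get_jobs(['patrol-1', 'patrol-2'], received.append)
print(received)  # → ['patrol-1', 'patrol-2']
```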

This is the shared code (it lives in a .py file and is imported with from blah import *):

#TheLawsandOrdinancesoftheCitiesAnkhandMorpork.py
import pika
import logging as l

def getJobs(officer, callback):
    l.info(f'Waiting for jobs for {officer}')
    while True:
        try:
            with pika.BlockingConnection(pika.ConnectionParameters('rabbitmq')) as connection:
                channel = connection.channel()
                channel.queue_declare(queue=officer)
                channel.basic_consume(queue=officer, auto_ack=True, on_message_callback=callback)
                l.info(f'{officer} reporting for duty sir')
                channel.start_consuming()
        # Don't recover if connection was closed by broker
        except pika.exceptions.ConnectionClosedByBroker:
            l.error('Rabbit error connection closed by broker')
            break
        # Don't recover on channel errors
        except pika.exceptions.AMQPChannelError:
            l.error('Rabbit error channel error')
            break
        # Recover on all other connection errors
        except pika.exceptions.AMQPConnectionError:
            l.error('Retrying rabbitMQ listener')
            continue

This ^^^ stuff was boilerplate in my last app; now I can reuse it with the following in the actual running app:

#Detritus.py
from TheLawsandOrdinancesoftheCitiesAnkhandMorpork import *

def callback(ch, method, properties, body):
    patrol = body.decode()
    print(f'I have received {patrol}')

# Then from main() just call the shared code
def main():
    officer = 'Detritus'
    getJobs(officer, callback)
    # getJobs will then live in a while loop and "callback" to the supplied
    # callback function. All you need then is a bit of error handling

# Standard boilerplate to call the main() function to begin
# the program.
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Keyboard interrupt')
        l.warning("Keyboard interrupt")
        try:
            connection.close()
            sys.exit(0)
        except SystemExit:
            os._exit(0)

Again, I hope the above formatting comes out well. I’m not sure if this is ok for high-volume stuff, but for what I am doing (cheap parallelisation and a messaging programming paradigm) it works well for me!

Categories
Python

Using RabbitMQ (pika) with Python

I have struggled to find what I would consider a nice way to connect to RabbitMQ from a python program.

One issue I have found is that Stack Overflow answers are not great if you are reusing connections or have something that isn’t just a single sending program.

My use case: I am programming around a narrative (in this new project I am using “The Watch” from Terry Pratchett’s Discworld).

In my last project I had what amounted to boiler plate code in each “Character” (python app). For this new project (the Quantum one), I wanted something a bit easier and cleaner.

So let me introduce TheLawsandOrdinancesoftheCitiesAnkhandMorpork.py

This is a “helper” module that I will import * from. This is how I wrote the messaging side. It also produces less crap in the RabbitMQ logs than calling channel.close():

import pika
import json
import logging as l
exchange = ''

def sendMessage(person, patrol):
    try:
        patrol = json.dumps(patrol)
    except Exception as err:
        l.error(f'could not dump the patrol to json: {err}')
    try:
        with pika.BlockingConnection(pika.ConnectionParameters('rabbitmq')) as connection:
            channelM = connection.channel()
            channelM.queue_declare(queue=person)
            channelM.basic_publish(exchange=exchange, routing_key=person, body=patrol)
    except Exception as err:
        l.error(f'Problem sending rabbit message: {err}')

On my screen that is a bit hard to read;

What I am doing is a function that takes the name of a “character” (ie. a queue) and a dict. I then encode the dict as JSON and, using the “with” block, send it to the correct queue.
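The encode/decode round trip on its own, with a hypothetical patrol dict (the field names are made up for illustration):

```python
import json

# hypothetical payload: the queue name is the character, the body a dict
patrol = {'street': 'Treacle Mine Road', 'shift': 'night'}

body = json.dumps(patrol)    # what gets published to the queue
restored = json.loads(body)  # what the consumer decodes
print(restored == patrol)  # → True
```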

By avoiding the connection.close() call I no longer get an error in rabbitMQ saying the connection closed unexpectedly.

It may not be the most efficient, not reusing an existing connection, but the connection bring-up is tiny, and I would rather take cleanliness over efficiency in this case.

I will be running quantum simulations so the bottleneck will not be here!

Categories
personal quick stories

Quick story

Driving along a winding road, up the side of a hill.
It is deepening dusk, and the road curves upwards to the right.
The mountain side has some trees (pines) and short grass. It’s cold, maybe close to freezing.
The car is cold, the windscreen is still a small bit fogged.

The surface of the road changes to closely packed upturned feet, with little faces on the soles (just below the toes). They are all staring at you without speaking, as your tyres lose grip on the shiny upturned flesh. Your car skids, careens across the road, and out over the edge of the abyss.

As the car starts tipping end over end, rapidly approaching the ground, all you can think of is the road made of upturned feet.

Categories
Python

Importance of being defensive when contacting external services

I just got caught by a “reliable” internal service which started to give timeouts.

I never configured a timeout on the connection (the default was many minutes), which jammed the whole program.
It’s important to set aggressive timeouts in prod; better to error and figure out a way to accommodate the failure than to just wait.
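As a sketch of the pattern (the URL and the None fallback are made up for illustration): wrap the call with a short timeout and degrade instead of hanging.

```python
import urllib.request

def fetch_with_timeout(url, timeout=2):
    # fail fast: a couple of seconds, not the multi-minute default
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except Exception as err:
        # better to log and fall back than jam the whole program
        print(f'external service failed, using fallback: {err}')
        return None

# .invalid is a reserved TLD, so this fails fast rather than hanging
print(fetch_with_timeout('http://service.invalid/health'))  # → None
```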

Perhaps the next step is to make my program internally defensive in order to combat my poor coding skills.

Categories
Uncategorised

List of dicts to table with PrettyTable

Given:

from prettytable import PrettyTable

summaryList = [{"Event Name": "Pod_Service_MTTR_Notify", "Count": 1}, {"Event Name": "Pod_Service_No_2_alert", "Count": 20}]
if summaryList:
    summaryTable = PrettyTable(summaryList[0])
    for i in summaryList:
        summaryTable.add_row(i.values())
    print(summaryTable)
Categories
Uncategorised

Rate Limiter / Cool Down timer in python3 and redis

My main source of data… jira service desk… cannot be trusted. Oh yes, 90% of the time it acts sane, but once in a while someone misconfigures a BigPanda alert and we get 1000 new incidents in moments.

My colleague got flooded with PagerDuty pages and emails one Saturday, and I’m sure it made him cry. I need to rate limit these actions.

My plan is to use the incr function to increment a redis key. The idea is that you create a new key every x seconds, with an expiry (equal to x); when you increment, redis responds with the current value, so you can just test it with an if.

First things first: I know how to do something every second (int(time())), but every 10 minutes?

from time import time, sleep

while True:
    everyXSeconds = 10
    curTime = int(time())
    key = curTime - curTime % everyXSeconds
    print(key)
    sleep(1)

I’m only calling time() once, because calling time() twice in the mod actually gives two different values… and I just know I am going to hit some corner case where that crosses a second boundary and messes up EVERYTHING.

BUT… the above is actually crap and not needed at all. It is a crap way to rate limit because, for example, if I am trying to stop an email flood it will still send 10 emails every X seconds; the count just resets every X seconds.

To me the following makes a bit more sense:

# red is an existing redis client, queueName the sorted-set key
while True:
    curTime = time()
    print(f"curTime is {curTime}")
    everyXSeconds = 10
    rateBeginTime = curTime - everyXSeconds
    count = red.zcount(queueName, rateBeginTime, curTime)
    print(f"current hit queue is :{count}")
    limitMap = {curTime: curTime}
    red.zadd(queueName, limitMap)
    sleep(1)
    remove = red.zremrangebyscore(queueName, 0, rateBeginTime)
    print(f"removed {remove} keys")

So that’s just what I used to test…. but basically you are using a sorted set, adding and removing as needed.
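The same sliding-window idea can be sketched without redis at all (the limit of 5 per window is an arbitrary number for the example):

```python
from time import time

WINDOW = 10  # seconds
LIMIT = 5    # max events allowed per window
hits = []    # stands in for the redis sorted set

def allow(now=None):
    # drop timestamps older than the window (the zremrangebyscore step),
    # count what's left (zcount), and record this hit (zadd)
    now = time() if now is None else now
    begin = now - WINDOW
    hits[:] = [t for t in hits if t > begin]
    if len(hits) >= LIMIT:
        return False
    hits.append(now)
    return True

print([allow(now=100.0) for _ in range(6)])  # → [True, True, True, True, True, False]
print(allow(now=111.0))                      # → True (old hits aged out)
```

Unlike the fixed-bucket version, this never lets more than 5 events through in any 10-second span, no matter where the boundary falls.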