Category Archives: Development

Making a musical Robot Santa ornament using an ATtiny 85

William tests robot santa

In what is threatening to become a tradition, I made a Christmas ornament again this year. Last year I just made simple tree ornaments using Sculpey and Fimo.

This year things got a bit more involved, as I decided to make a musical model of the Robot Santa from Futurama. It was a good thing I started working on it in November, as it took quite a few evenings to get it all finished.


Porting chrss from Turbogears 1.0 to Django 1.3

For a period of 18 months or so I slowly ported my chess-by-RSS site (chrss) from Turbogears 1.0 to Django. When I started the port Django 1.1 was the latest version; I took so long to finish that Django got to version 1.3 in the meantime! The slowness was mostly due to my then pending and now actual fatherhood. However, by slowly toiling away, with the aid of quite a few automated tests, I managed to deploy a Django version of chrss a couple of months ago.

Python Framework to Python Framework

The good thing about moving from Turbogears to Django was that it’s all Python code. This meant that things like the core chess code didn’t need touching at all. It also meant that a lot of fairly standard code could be almost directly ported. Methods on controller objects in Turbogears became view functions in Django, methods on model objects stayed methods and so on. A lot of the porting was fairly mechanical. I moved all of the Turbogears code into a folder for easy referral and then built the Django version from nothing back up. Initially most of the work was done at the model/db level, where I could copy over the code, convert it to Django style and then copy over and update the automated tests. I used Django Coverage to make sure the code was still all actually getting tested.
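As a rough illustration of how mechanical a lot of it was, here’s a made-up example (not actual chrss code – Game is a hypothetical model) of a TurboGears 1.0 controller method and roughly what it becomes as a Django view:

    # TurboGears 1.0: a method on a controller object (hypothetical example)
    import turbogears
    from turbogears import controllers

    class Root(controllers.RootController):
        @turbogears.expose(template="chrss.templates.game")
        def game(self, game_id):
            game = Game.get(int(game_id))          # SQLObject-style lookup
            return dict(game=game)

    # Django: the same thing as a view function
    from django.shortcuts import render_to_response, get_object_or_404

    def game(request, game_id):
        game = get_object_or_404(Game, pk=game_id)
        return render_to_response('game.html', {'game': game})

The controller method and the view function end up doing the same job: fetch a model object and hand it to a template.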

I could have opted to exactly match the database from the Turbogears version, but instead chose to make it more Django-like. This meant using the regular Django user models and so on. As Turbogears 1.0 involved a custom user model, I had to create a separate user profile model in the Django port. There were a few other changes along these lines, but most of the porting was not so structural.

A lot of the hard work during porting came from missing bits of functionality that had far reaching effects. Testing was very awkward until a lot of the code had been ported already.

Cheetah templates to Django templates

Chrss used Cheetah for templates. Cheetah is not as restrictive with its templates as Django: it’s very easy to add lots of logic and code in there. Some pages in chrss had quite a bit of logic – in particular the main chess board page. This made porting to Django rather tricky. I had to put a lot of code into template tags and filters and carefully re-organise things. Porting the templates was probably the hardest part. Automated tests helped a bit with this, but a lot of the issues needed visual verification to ensure things really were working as they should.
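For example, a bit of board-rendering logic that would have sat inline in a Cheetah template ends up as a Django template filter along these lines (a hypothetical filter, not one from chrss):

    # board_extras.py - hypothetical template filter module
    from django import template

    register = template.Library()

    @register.filter
    def square_class(square):
        '''Return the css classes for a board square, e.g. "white selected".'''
        classes = [square.colour]
        if square.selected:
            classes.append('selected')
        return ' '.join(classes)

The template then just does {% load board_extras %} and uses {{ square|square_class }}, keeping the logic out of the template itself.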

One advantage of going from Cheetah to Django’s template language was the incompatible syntax. This meant I could simply leave bits of Cheetah code in a template and they would serve as quite a good visual reminder of something that was yet to be ported.

The second 90%

A good portion of the site was ported before my son’s birth. It seemed like maybe it wouldn’t take much longer, as it felt like 90% of the work was done. Of course it turned out there was another 90% yet to finish.

Beyond the usual tweaking and finishing of various odds and ends, the remaining work consisted of:

  • Completing the openid integration
  • Migrating the database

For the OpenID integration I opted to use Simon Willison’s Django OpenID app – hoping to have a nice drop-in app. Possibly due to the slightly unfinished nature of the app, and mostly due to my desire to customise the urls and general flow of the login/register process, this took a fair while. It might have been quicker to directly port the original OpenID code I used with Turbogears, but it did work out ok in the end.

Of course it turns out that, after all that hard work, hardly anyone seems to use OpenID to log in to sites anymore. I might have been better off integrating Django Social Auth instead, for Twitter and/or Facebook logins. However I decided that this would have been too much like feature creep and stuck with the original plan.

The chrss database isn’t very large. The table recording the moves for each game is currently over 70,000 rows, but each row is very small, so the database dump when tar-gzipped is less than 3MB in size. This gave me plenty of options for database migration. I’d roughly matched the schema used for the Turbogears site, but did take the opportunity to break from the past slightly. I created a Python script that talked to the MySQL database and generated an SQL dump in the new format. By using Python I was free to do any slightly more complex database manipulation I needed. The only really tricky part was converting to Django’s user models.
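The script was along these lines – a simplified sketch with made-up table and column names rather than the real chrss schema:

    # read from the old Turbogears/MySQL schema and print INSERT statements
    # for the new Django schema (a real script should escape values properly)
    import MySQLdb

    conn = MySQLdb.connect(db='chrss', user='chrss', passwd='secret')
    cursor = conn.cursor()

    cursor.execute("SELECT id, user_name, email_address FROM tg_user")
    for user_id, username, email in cursor.fetchall():
        print("INSERT INTO auth_user (id, username, email) "
              "VALUES (%d, '%s', '%s');" % (user_id, username, email))

    cursor.close()
    conn.close()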

One wrinkle with the database migration was a bit disturbing. When I set up the app on WebFaction I found some very odd behaviour: logging in seemed to work, but after a couple of page requests you’d be logged out. The guys at WebFaction were very helpful, but we were unable to get to the bottom of the problem. During testing I found that the problem did not happen with SQLite or Postgres, so it was something to do with MySQL. This was one of those times when using an ORM paid off massively: apart from the database migration script, no code needed to be touched. If I’d had more time I might have persevered with MySQL, but Postgres has been working very well for me.

Conclusion

Chrss has been running using Django and Postgres for nearly eleven months now and has been working very well. I’ve had to fix a few things now and then, but doing this in Django has been very easy. I was also able to automate deployment using Fabric, so new code could be put live with the minimum of fuss. When you’ve only got a limited time to work with, automated deployment makes a huge difference.
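The fabfile itself only needs a handful of tasks. A stripped-down sketch (with a made-up host and paths, not my actual setup) looks something like:

    # fabfile.py - simplified deployment sketch
    from fabric.api import env, run, cd

    env.hosts = ['chrss@example.com']           # hypothetical host

    def deploy():
        with cd('/home/chrss/webapps/chrss'):   # hypothetical path
            run('git pull')
            run('python manage.py syncdb')
            run('touch deploy/chrss.wsgi')      # prod the WSGI server to reload

Then putting new code live is just a case of running fab deploy.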

Hopefully now that sleep routines are better established and my own sleep is less interrupted I’ll have a chance to add new features to chrss soon.

A Little Sparql

Quite often I’ll get distracted with an idea/piece of technology and mull it over for quite some time. Most of the time I play through these ideas in my head and don’t do anything with them. It’s often just an exercise in learning and understanding. I’ll read up on something and then try to think about how it works, what I could use it for and so forth.

Occasionally though I’ll feel compelled to actually write some code to test out the idea. It’s good to prototype things. It’s particularly handy for testing out new tech without the constraints of an existing system.

The latest idea that’s been running around my head has been triplestores. I’ve still yet to need to use one, but wanted to get a better understanding of how they work and what they can be used for. Having spent a large number of years in the SQL database world, it seems like a good idea to try out some alternatives – if only to make sure that I’m not shoe-horning an SQL database into something purely because I know how they work.

So rather than do the sane thing and download a triplestore to play around with, I decided to write my own. Obviously I don’t have unlimited time, so it had to be simple. As such I decided an in-memory store would still let me learn a fair bit, without being too onerous. Over the past few weeks I’ve been working on this triplestore and it’s now at a point where it’s largely functional:

https://github.com/lilspikey/mini-sparql

To be honest, writing this has been more of an exercise than anything else – a coding kata.

A few key points of the implementation:

  • PyParsing for parsing the SPARQL queries
  • Lots of generator functions to make evaluation lazy
  • Interactive prompt for running SPARQL queries
  • Load triple data from file/stdin (as triples of data in simplified turtle format)

As the data is all stored in memory it’s no good as a persistent store, but for quickly inspecting some triple data it works pretty well. Simply point the minisparql.py script at a file containing triple data and away you go:


$ python minisparql.py birds.ttl
sparql> SELECT ?name WHERE { ?id name ?name . ?id color red }
name
'Robin'
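I won’t reproduce the real birds.ttl here, but the data file is just one subject, predicate and object per statement, so something along these lines would produce that result:

    robin name 'Robin' .
    robin color red .
    crow name 'Crow' .
    crow color black .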

PyParsing

PyParsing is a great project for writing recursive descent parsers. It makes use of operator overloading to allow you to (mostly) write readable grammars. You can also specify what objects you want to be returned from parsing – so it’s quite easy to parse some code and get back a list of objects that can then execute the code.
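As a toy example (nothing to do with minisparql’s actual grammar), attaching a parse action means the parser hands back ready-made objects rather than raw strings:

    from pyparsing import Word, alphas

    class Variable(object):
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return 'Variable(%s)' % self.name

    # a SPARQL-style variable: '?' followed by letters
    variable = Word('?', alphas)
    variable.setParseAction(lambda tokens: Variable(tokens[0]))

    print(variable.parseString('?name')[0])   # -> Variable(?name)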

It also includes quite a few extra batteries. One of the most powerful is operatorPrecedence, which makes it a piece of cake to create the classic arithmetic style expressions we all know and love. For example, the operatorPrecedence call in minisparql looks like:


expr=Forward()
# code removed for clarity    
expr << operatorPrecedence(baseExpr,[
        (oneOf('!'), 1, opAssoc.RIGHT, _unaryOpAction),
        (oneOf('+ -'), 1, opAssoc.RIGHT, _unaryOpAction),
        (oneOf('* /'), 2, opAssoc.LEFT, _binOpAction),
        (oneOf('+ -'), 2, opAssoc.LEFT, _binOpAction),
        (oneOf('<= >= < >'), 2, opAssoc.LEFT, _binOpAction),
        (oneOf('= !='), 2, opAssoc.LEFT, _binOpAction),
        ('&&', 2, opAssoc.LEFT, _binOpAction),
        ('||', 2, opAssoc.LEFT, _binOpAction),
    ])

This handles grouping the various operators appropriately. So rather than parsing:

5 * 2 + 3 + 2 * 4

And getting back just a flat list of the individual elements, you instead get a properly nested parse tree that is equivalent to:

((5 * 2) + 3) + (2 * 4)

Where each binary operator is only operating on two other expressions (just as you would expect under classic C operator precedence).

This meant that implementing FILTER expressions was pretty straightforward.

Generators

Generator functions are one of my favourite features in Python. I was completely blown away by David Beazley’s Generator Tricks for Systems Programmers. He starts off with fairly simple examples of generators and shows how they can be combined, much like piped commands in Unix, to create extremely powerful constructs. Therefore it was obvious to me that generators were a good fit for minisparql.
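The Unix-pipe flavour is easy to see in a generic example (nothing to do with minisparql – access.log is just a made-up file):

    # each stage is a generator, so nothing is read until the for loop pulls
    def read_lines(path):
        with open(path) as f:
            for line in f:
                yield line.rstrip('\n')

    def grep(pattern, lines):
        for line in lines:
            if pattern in line:
                yield line

    errors = grep('ERROR', read_lines('access.log'))
    for line in errors:
        print(line)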

The code for the regular, optional and union joins became pretty straightforward using generators. For example the union join looked like:


    def match(self, solution):
        for m in self.pattern1.match(solution):
            yield m
        for m in self.pattern2.match(solution):
            yield m

This simply yields the results of one query, followed by the results of the other. The great thing about this is that if we only evaluate the first few values we might not even need to evaluate the second query. Better still, you can start getting results straight away. A more classical approach would entail building up a list of results:


    def match(self, solution):
        matches = []
        for m in self.pattern1.match(solution):
            matches.append(m)
        for m in self.pattern2.match(solution):
            matches.append(m)
        return matches

This code would always execute both queries – even if it wasn’t strictly necessary. To my eye the generator code is much clearer as well. yield stands out very well, in much the same way as return does. It’s certainly much clearer than matches.append(m), which does not immediately make it obvious that data is being returned/yielded from the function.
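A quick way to see the laziness in action is to wrap a couple of stand-in patterns (hypothetical classes, just for illustration – the real pattern classes in minisparql are a bit more involved) around the union join and only pull one result:

    from itertools import islice

    class FakePattern(object):
        '''Stand-in pattern that records whether it was ever evaluated.'''
        def __init__(self, results):
            self.results = results
            self.evaluated = False
        def match(self, solution):
            self.evaluated = True
            for r in self.results:
                yield r

    class UnionJoin(object):
        def __init__(self, pattern1, pattern2):
            self.pattern1, self.pattern2 = pattern1, pattern2
        def match(self, solution):
            for m in self.pattern1.match(solution):
                yield m
            for m in self.pattern2.match(solution):
                yield m

    p1, p2 = FakePattern(['a', 'b']), FakePattern(['c'])
    union = UnionJoin(p1, p2)
    print(list(islice(union.match({}), 1)))   # ['a']
    print(p2.evaluated)                       # False - the second query never ran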

Anyway, so here’s my silly little bit of exploration. I’ve learnt a fair bit and got to play with some different ideas.

If I was to take this any further I’d probably want to add some sort of disk-based backend. That way it might be useful for small/medium-sized SPARQL/RDF datasets for simple apps. Much like SQLite for SQL databases or Whoosh for full-text search.

Django User Profiles and select_related optimisation

If you want to associate extra information with a user account in Django you need to create a separate “User Profile” model. To get the profile for a given User object one simply calls get_profile. This is a nice easy way of handling things and helps keep the User model in Django simple. However it does have a downside – fetching the user and user profile requires two database queries. That’s not so bad when we’re selecting just one user and profile, but when we are displaying a list of users we’d end up doing one query for the users and another n queries for the profiles – the classic n+1 query problem!

To reduce the number of queries to just one we can use select_related. Prior to Django 1.2 select_related did not support the reverse direction of a OneToOneField, so we couldn’t actually use it with the user and user profile models. Luckily that’s no longer a problem.

If we are clever we can set up the user profile so it still works with get_profile and does not create an extra query.

get_profile looks like this:


    def get_profile(self):
        if not hasattr(self, '_profile_cache'):
            # ...
            # query to get profile and store it in _profile_cache on instance
            # so we don't need to look it up again later
            # ...
        return self._profile_cache

So if select_related stores the profile object in _profile_cache then get_profile will not need to do any more querying.

To do this we’d define the user profile model like this:


class UserProfile(models.Model):
    user = models.OneToOneField(User, related_name='_profile_cache')
    # ...
    # other model fields
    # ...

The key thing is that we have set related_name on the OneToOneField to be '_profile_cache'. The related_name defines the attribute on the *other* model (User in this case) and is what we need to refer to when we use select_related.

Querying for all user instances and user profiles at the same time would look like:


User.objects.select_related('_profile_cache')

The only downside is that this does change the behaviour of get_profile slightly. Previously, if no profile existed for a given user, a DoesNotExist exception was raised. With this approach get_profile instead returns None, so your code will need to handle that. On the upside, repeated calls to get_profile when no profile exists won’t re-query the database.

The other minor problem is that we are relying on the name of a private attribute on the User model (that pesky little underscore indicates it’s not officially for public consumption). Theoretically the attribute name could change in a future version of Django. To mitigate this I’d just store the name in a variable, or on the profile model as a class attribute, and reference that wherever it’s needed, so at least there’s only one place to make any change. Apart from that, this only requires a minor modification to your user profile model and the use of select_related, so it’s not a massively invasive optimisation.
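As a sketch of that “one place” idea (the names here are purely illustrative):

    from django.contrib.auth.models import User
    from django.db import models

    # keep the magic attribute name defined once
    PROFILE_RELATED_NAME = '_profile_cache'

    class UserProfile(models.Model):
        user = models.OneToOneField(User, related_name=PROFILE_RELATED_NAME)
        # ... other model fields ...

    # elsewhere, when querying:
    users = User.objects.select_related(PROFILE_RELATED_NAME)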

Blinking Halloween Spider Eyes Hats using PICAXE 08m chips

For some reason I got seized by the idea of creating some electronics for our Halloween costumes this year. In previous years I have gone as far as dying my hair green for Halloween, but that is just an evening’s work. This year we were having a more sedate affair – it being our first Halloween as parents.

We settled on making spider costumes: basically black clothing, with extra arms made out of tights. I then decided that, as spiders have eight eyes, making some hats with six extra eyes would make sense. After initially thinking I would just hook up some LEDs straight to a battery, I decided to massively complicate matters by trying to make them blink periodically (at random).

So for this act of over-complication I chose to use the PICAXE 08m microcontroller. The core circuit (minus the ability to program it) is essentially just 3 AA batteries, the chip itself and two resistors – so at least for a circuit involving a microcontroller it wasn’t that byzantine. That is kind of the point of a microcontroller though: to the end-user it hides its internal complexity away. I don’t really need to know how many hundreds or thousands of transistors it contains internally. I only need to think about how it fits into my circuit.

I chose to have six LEDs to create pairs of “eyes” that would blink in tandem. This meant the PICAXE chip would turn three output pins on and off at random intervals. With a 4.5V power supply each pair of LEDs wouldn’t need a massive current limiting resistor – 100Ohm would be sufficient. So the circuit consists of the 08m chip, two resistors (10KOhm and 22KOhm) to pull down the serial-in pin, a 100nF capacitor to smooth the power supply, six LEDs and three 100Ohm current limiting resistors:

It’s not actually that complex a circuit, but spending several hours soldering it up (twice) for a couple of hours at a party might have been overkill. Still it was good practice. I also got to try out fitting everything into some project boxes and generally making things look “proper”.


Underside of circuit

Finished soldering the LED eyes

Making sure it all fits in project box

I used perfboard for the finished circuit, with some holes enlarged to accept screws for the project box. I also added some extra holes so I could loop the power and LED wires through. This helped to make the connections nice and sturdy.

The chips were programmed using a different circuit, with the serial adapter attached and were then removed from their sockets and inserted into the soldered circuits. The two hats had slightly different programs uploaded to tweak the speed and random nature of the blinking (so they wouldn’t be too similar). The code is viewable at my picaxe08m repository on github (as well as a Fritzing file for the circuit). I did encounter a bug in the PICAXE compiler whilst writing this code. When I started using subroutines (and the gosub call) MacAXEPad would suddenly say “Hardware not found” when attempting to upload. Apparently this problem only affects the Mac version of AXEPad. Luckily there was an easy fix – simply adding a dummy sub-routine at the end of the file:

interrupt: return


The Eye Hat

The finished result looked a bit goofy, but it did work. The main problem was that it only really worked when the light was low. Not a bad problem for Halloween I guess:



“Ultimate” Arduino Doorbell – part 2 (Software)

As mentioned in the previous post about my arduino doorbell I wanted to get the doorbell and my Chumby talking.

As the Chumby runs Linux, is on the network and is able to run a recent version of Python (2.6) it seemed like it would be pretty easy to get it to send Growl notifications to the local network.

This did indeed prove fairly easy and involved three main activities:

  • Listening to the serial port for the doorbell to ring
  • Using Bonjour (formerly Rendezvous)/Zeroconf to find computers on the network
  • Sending network Growl notifications to those computers

Getting Python and PySerial running on the Chumby is pretty easy. The Chumby simply listens for the doorbell to send the string ‘DING DONG’ and can then react as needed.


def listen_on_serial_port(port, network):
    ser = serial.Serial(port, 9600, timeout=1)
    try:
        while True:
            line = ser.readline()
            if line is not None:
                line = line.strip()
            if line == 'DING DONG':
                network.send_notification()
    finally:
        if ser:
            ser.close()

port is the string representing the serial port (e.g. /dev/ttyUSB0). network is an object that handles the network notifications (see below).

The network object (an instance of Network) has two jobs. Firstly to find computers on the network (using PicoRendezvous – now called PicoBonjour) and secondly to send network growl notifications to those computers.

A background thread periodically calls Network.find, which uses multicast DNS (Bonjour/Zeroconf) to find the IP addresses of computers to notify:


class Network(object):
    title = 'Ding-Dong'
    description = 'Someone is at the door'
    password = None
    
    def __init__(self):
        self.growl_ips = []
        self.gntp_ips = []

    def _rendezvous(self, service):
        pr = PicoRendezvous()
        pr.replies = []
        return pr.query(service)
    
    def find(self):
        self.growl_ips = self._rendezvous('_growl._tcp.local.')
        self.gntp_ips  = self._rendezvous('_gntp._tcp.local.')

    def start(self):
        import threading
        t = threading.Thread(target=self._run)
        t.setDaemon(True)
        t.start()
    
    def _run(self):
        while True:
            self.find()
            time.sleep(30.0)

_growl._tcp.local. are computers that can handle Growl UDP packets and _gntp._tcp.local. those that can handle the newer GNTP protocol. Currently the former will be Mac OS X computers (running Growl) and the latter Windows computers (running Growl for Windows).

I had to tweak PicoRendezvous slightly to work round a bug on the Chumby version of Python, where socket.gethostname was returning the string '(None)', but otherwise this worked ok.

When the doorbell activates netgrowl is used to send Growl UDP packets to the IP addresses in growl_ips and the Python gntp library to notify those IP addresses in gntp_ips (but not those duplicated in growl_ips). For some reason my Macbook was appearing in both lists of IP addresses, so I made sure that the growl_ips took precedence.


    def send_growl_notification(self):
        growl_ips = self.growl_ips
        
        reg = GrowlRegistrationPacket(password=self.password)
        reg.addNotification()
    
        notify = GrowlNotificationPacket(title=self.title,
                    description=self.description,
                    sticky=True, password=self.password)
        for ip in growl_ips:
            addr = (ip, GROWL_UDP_PORT)
            s = socket(AF_INET, SOCK_DGRAM)
            s.sendto(reg.payload(), addr)
            s.sendto(notify.payload(), addr)

    def send_gntp_notification(self):
        growl_ips = self.growl_ips
        gntp_ips  = self.gntp_ips
        
        # don't send to gntp if we can use growl
        gntp_ips = [ip for ip in gntp_ips if (ip not in growl_ips)]
        
        for ip in gntp_ips:
            growl = GrowlNotifier(
                applicationName = 'Doorbell',
                notifications = ['doorbell'],
                defaultNotifications = ['doorbell'],
                hostname = ip,
                password = self.password,
            )
            result = growl.register()
            if not result:
                continue
            result = growl.notify(
                noteType = 'doorbell',
                title = self.title,
                description = self.description,
                sticky = True,
            )

    def send_notification(self):
        '''
        send notification over the network
        '''
        self.send_growl_notification()
        self.send_gntp_notification()

The code is pretty simple and will need some more tweaking. The notification has failed for some reason a couple of times, but usually seems to work. I’ll need to start logging the doorbell activity to see what’s going on, but I suspect Bonjour/Zeroconf is sometimes finding no computers. If so I’ll want to keep the previously found addresses hanging around for a little longer – just in case it’s a temporary glitch.
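If that does turn out to be the cause, the fix is probably as simple as this (an untested sketch) – only replacing the cached lists when a lookup actually returns something:

    def find(self):
        growl_ips = self._rendezvous('_growl._tcp.local.')
        gntp_ips = self._rendezvous('_gntp._tcp.local.')
        # hang on to the previously found addresses if a lookup comes back empty
        if growl_ips:
            self.growl_ips = growl_ips
        if gntp_ips:
            self.gntp_ips = gntp_ips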

You can see the full code in my doorbell repository.

Programming a Picaxe 08m chip

After I got my Arduino I felt the urge to brush up on my general electronics knowledge. The last time I’d really played with any circuits was back in about 1994 when I was studying my Technology GCSE – which now is quite a long time ago. So I picked up a copy of Make: Electronics and started reading through it. Sadly I was a bit lazy and just read through the book, rather than actually building many of the circuits suggested. It did come in handy for tips on soldering and I do intend to go back and make some of the circuits – I just got a bit distracted by a new arrival. However the last chapter covered microcontrollers and in particular the Picaxe 08m chip. This really piqued my interest!

The Picaxe 08m is a PIC chip with a pre-installed boot-loader that allows programming the chip via a serial port. This is similar to the way that the Arduino has an AVR chip with a pre-installed boot-loader. There’s also some software for the Picaxe for handling the programming, a bit like the Arduino IDE.

The 08m is massively less powerful than an Arduino. It has 512 *bytes* of program storage and only 14 *bytes* of variable RAM. However it has 5 outputs/4 inputs and can run a single servo. In fact I could have used it for the logic part of my Arduino doorbell. It’s not quite as friendly to get started with as an Arduino though: you need to wire up a simple circuit before you can get going. In fact I actually had to solder together a prototyping board, as I didn’t have much luck trying to use a breadboard. The circuit needed isn’t that complex (assuming you use 3 AA batteries for power). The cost of the 08m chip is less than £2 and the proto-board is £2, so altogether you can have a simple usable microcontroller for less than £4. It’s also small and doesn’t use much power. So for projects where an Arduino is overkill it’s perfect.

Soldered picaxe 08m prototype board

To make testing out circuits easier I also added a female header to the board, so I could plug in breadboard jumper cables (in the same way as one does with an Arduino Duemilanove):

Picaxe 08m prototype board with header attached

To verify everything worked I wrote the “hello world” program of microcontroller world, a program that simply blinks an LED (by turning pin 1 on and off):

main:
	high 1
	pause 1000
	low 1
	pause 1000
	goto main

That’s Picaxe BASIC by the way. Maybe not as pretty as C, but for a processor with only 14 byte-sized variables it’s more than enough.

Then I wired up an LED into the prototype board, along with a small current limiting resistor:

Picaxe 08m prototype board + header blinking LED

Using the Mac version of AXEPad and the Picaxe USB serial cable (which has a stereo socket at one end) I uploaded the program to the 08m chip and saw the LED blinking once every second. I’ve recorded a video of the programming, which should show how straightforward it is:

I’ve got a few projects in mind that are actually space/weight constrained, and one in particular that might put the parts involved into extreme peril, so having the option of using a very small and cheap microcontroller is very appealing. I’ll reserve the Arduino now for projects that need its extra power and amazing flexibility.

“Ultimate” Arduino Doorbell – part 1 (Hardware)

After a first couple of small Arduino projects I felt the need to make something a bit more useful and permanent.

I had seen Roo Reynolds talking about hacking his doorbell to get it onto Twitter, and as our current doorbell is pretty rubbish (it should ring, but just makes a rattling sound) this seemed like a good project. His hack only updated Twitter though, so there was no direct physical sign that the doorbell had been rung. I decided to start off with the physical/audible side of things and then later move on to pushing notifications out to laptops on the local network or beyond…

I’ve been documenting my progress on Flickr now for a while.

Take apart wireless doorbell

Following Roo’s approach, I picked up a cheap wireless doorbell from Tesco for about £9 and took it apart:

I was able to verify that I could turn an LED on/off with the doorbell at this stage, so I knew I could use that as a signal in the Arduino. Using a multimeter also let me figure out some more details about the voltage and current used to power the speaker:

Testing wireless doorbell with multi-meter

This was about 3.1 V and 75mA output. That’s a few milliamps more than the Arduino can tolerate, so I knew I had to make sure I put a resistor (22KOhm) in between the doorbell and the Arduino. The doorbell would also need to draw power from the Arduino, but luckily the Arduino has a separate 3V output, so that wasn’t an issue.

Testing connecting Arduino to doorbell

From there I verified that when the doorbell was activated I could read a drop in voltage from the wire on the doorbell labelled “SPI”. At this point I had the input for my Arduino.

Servos and prototypes

The next step was to provide some sort of audible output. Originally I had thought about just adding a buzzer, but fancied something a bit more old-fashioned. I figured it’d be good fun if the nice 21st century tech of the Arduino used some rather more ancient technology to create sound – a bell.

Using a micro servo from Oomlout I created a couple of prototypes.

I made a pretty simple circuit to hook up the servo, doorbell and Arduino:

The first prototype used some lego and a bell from a Christmas ornament:

This worked, but the servo was probably louder than the bell!

Next I got hold of a small brass bell from a music shop. Luckily the top unscrewed, which let me easily attach it to a piece of lego. So another prototype was created using the small brass bell:

Prototype doorbell with brass bell

This worked much better!

The bell is about 100g in weight, which isn’t massive, but when it was flung around the lego prototype would tend to move too. Not so good when this will live on the kitchen window sill, so I’d need something sturdier for the finished version.

A little soldering

As well as trying out a better bell I also made the circuit for the bell and servo more permanent, by soldering everything up on a piece of stripboard. I also used a few headers, so most of the circuit would plug into the Arduino – like a rudimentary shield. The wire to the pin controlling the servo was left to be plugged in separately. I hadn’t soldered anything since school, so I ended up swearing quite a bit trying to do this. For me it was probably the trickiest part of the whole exercise, but it did work in the end. I was just glad I had a multimeter with a continuity mode to find the short-circuits – a knife worked well to clear up some of those.

Stripboard soldering

Full soldered doorbell + arduino

Woodwork

I next turned to my (t)rusty woodwork skills to create a sturdier version. First off some pieces of wood to mount the Arduino, doorbell circuit and servo:

Arduino and doorbell mounted on wood block

Servo mounted in wooden block

Next came the base and first part of the bell moving arm:

Then I added the top piece of the arm as well as the picture wire to make the bell swing properly:

Closeup of bell innards

At this point everything worked as expected:

Tidying up

After the sawing and sanding was finished there was still some general tidying up to do.

Once I’d taped down the servo wire and antenna wire with electrical tape, I needed to deal with the movement of the whole piece when the arm swings out. First I got hold of some Sugru and made some small rubberised feet to prevent the bell sliding around:

Sugru rubber feet on bell

The next task was to add a counter weight so the bell wouldn’t tip over. For this I made two piles of fifteen pennies, which I wrapped in electrical tape and taped to the ledge on the reverse side from the Arduino:

Counter weights on back/front of doorbell

Eventually these will be covered up and will probably actually make up part of the internal structure of the outer decoration. This should also mean they’ll end up being attached better. For now though electrical tape has proved sufficient.

One final tweak involved inserting some small nylon washers either side of the arm:

Washer and tidier wire

This was done to help make the movement of the arm more consistent and slightly smoother. Previously I was finding that the arm would shift position slightly, which sometimes caused problems for the servo.

Deployment

With the hardware finished I then set about actually getting it all up and running as our real doorbell. So imagine my horror when, upon placing the doorbell on our kitchen window sill and connecting the power, everything went haywire and the arm just constantly moved! After some tweaking of the code and kitchen-side debugging I realised that I really should make use of the Arduino’s internal pullup resistors for the doorbell pin. Up until that point I had left it “floating”. When powered over USB this rarely drifted to zero, so I thought it was working fine. However on mains power it regularly dropped quite low, making the Arduino think the button had been pressed and triggering the servo. Configuring the pullup resistor and tweaking the activation threshold (as the idle reading was now higher than zero) sorted out this problem perfectly:

Conclusion

So we now have a slightly Heath Robinson doorbell. It works better than our old one and there’s plenty of scope for making it do more too. In fact phase two will be mostly about connecting the doorbell to my Chumby to get it communicating with the local network.

Check out my doorbell git repository on github for the code, as well as a fritzing circuit file.

Kite Physics

I’ve finally managed to finish the Kite Physics Demo I started back in October:



Kite Physics Demo

It’s more of a plaything than anything else. It did give me a chance to try out writing some physics code again, something I don’t get much of a chance to do during web development. Though of course advances in Javascript speed and the Canvas tag may change that in the future.

Originally I made use of the keyboard to control the figure and the kite, but I switched to using the mouse as it makes interacting with the scene more pleasing. There’s a certain joy to being able to grab the kite to move it around or shift the wind direction by nudging a cloud.

It’s all written in Java, using maven to build and package. The code is available in my kite repo on github.

There’s about one third of a physics engine in the code. There’s no real collision detection for example. Most of the physics code is for handling constraints and basic forces. For the constraints I made use of the Gauss-Seidel technique, as outlined in Advanced Character Physics by Thomas Jakobsen. This was quite a revelation to me. Previously I’d experimented with rigid body dynamics (see my 2D racing game circa 2001), but the Gauss-Seidel method seemed much more intuitive. I found it much easier to think of bodies made up of points connected by lines. The “physics” naturally emerges from the way the bodies are then constructed.
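The core of the technique is tiny. Sketched in Python here for brevity (the actual code in the repo is Java), each body is a set of particles moved with Verlet integration, and each stick constraint is repeatedly relaxed back towards its rest length:

    class Particle(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.px, self.py = x, y   # previous position (implicit velocity)

        def integrate(self, ax, ay, dt):
            # Verlet integration: next position from current + previous positions
            nx = 2 * self.x - self.px + ax * dt * dt
            ny = 2 * self.y - self.py + ay * dt * dt
            self.px, self.py = self.x, self.y
            self.x, self.y = nx, ny

    def relax(p1, p2, rest_length):
        # nudge both particles so their separation moves towards rest_length
        dx, dy = p2.x - p1.x, p2.y - p1.y
        d = (dx * dx + dy * dy) ** 0.5 or 1e-9
        diff = 0.5 * (d - rest_length) / d
        p1.x += diff * dx
        p1.y += diff * dy
        p2.x -= diff * dx
        p2.y -= diff * dy

    # Gauss-Seidel style: sweep over all the constraints several times per
    # frame, each relaxation seeing the already-updated positions of the rest
    # for _ in range(iterations):
    #     for p1, p2, length in constraints:
    #         relax(p1, p2, length)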

The core physics code (in the com.psychicorigami.physics.* package) is fairly well structured. There’s a bit of compiler placation to (theoretically) allow the same code to work for 2D and 3D physics. I’ve only actually implemented the 2D side of things, but have tried to use generics to make room for 3D too. When everything is points and lines, moving from 2D to 3D is not that tricky, whereas for rigid body dynamics it can mean quite a big change in representation.

The code in the default package is used for the specifics of the demo itself, so is a bit messy really. Lots of tweaking and prodding was needed to get things how I wanted.

The physics for the kite and wind interaction were aided massively by this guide to forces on a kite. Particularly useful for understanding how to tweak the rigging, centre of gravity and centre of pressure of the kite to make it behave in a pleasing – if not entirely realistic – manner.

At some point I hope to revisit the physics code here. I’ve always had a wish to create something like Karl Sims’ “blocky creatures”, but lacked the specific technical knowledge. Now that I have a technique for simulating complex constraints in an intuitive manner, maybe I’ll manage it yet.

watch-cp.py

During software development it’s key to minimise the time between editing code and seeing the results of a change. In web development the constant tweaking of CSS/HTML/Javascript etc. means you’re always reloading the browser to see changes.

At work I do a lot of Java web development, which normally involves compiling code, packaging into a .war file and deploying it to Tomcat (running locally). I make use of the maven tomcat plugin, so it’s just a case of calling maven tomcat:redeploy. However it still takes tens of seconds (or more if there are tests to run). For quick tweaks of css it’s nice to be able to short-circuit this process.

Tomcat unpacks the .war file to another directory after the app has been deployed. All the .jsp pages, css and javascript files can be edited in this directory and changes can be seen immediately. However getting into the habit of editing these files within this directory is usually a bad idea, as the files will get overwritten at the next deployment.

We’ve been using compass lately for css and it has a handy command:


    compass watch

That monitors .sass files and re-compiles them to css as they change, so you can see the results quickly (without needing to run compass manually each time).

So I thought I could do something similar for the editable files within the war file, and created watch-cp.py. It simply monitors a set of files and/or directories for changes and copies any files that have changed over to a destination file or directory. To provide a bit of feedback it prints out when it spots a changed file, but beyond that it’s pretty brute-force in its approach.
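The core of it is little more than this (a simplified sketch of the approach, rather than the actual script):

    import os
    import shutil
    import time

    def copy_changed(src_dir, dst_dir, mtimes):
        '''Copy any file under src_dir whose mtime has changed since last time.'''
        for root, dirs, files in os.walk(src_dir):
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(dst_dir, os.path.relpath(src, src_dir))
                if mtimes.get(src) != os.path.getmtime(src):
                    print('copying %s' % src)
                    if not os.path.isdir(os.path.dirname(dst)):
                        os.makedirs(os.path.dirname(dst))
                    shutil.copy2(src, dst)
                    mtimes[src] = os.path.getmtime(src)

    # then poll forever, e.g. (hypothetical paths):
    # mtimes = {}
    # while True:
    #     copy_changed('src/main/webapp', '/home/me/tomcat/webapps/mywebapp', mtimes)
    #     time.sleep(1)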

watch-cp.py works in a very similar way to the cp command, but without as many options. For example:


    # recursively copy files from src/main/webapp to tomcat
    watch-cp.py -r src/main/webapp/* ~/tomcat/webapps/mywebapp

This is great as it means I can edit my sass files, have them compiled to css by compass, then copied over to tomcat without needing to intervene. It takes a second sometimes, but it’s fast enough for the most part.

Feel free to fork the gist of watch-cp.py on github.