A friend wants a Knight Rider style Larson scanner for the back of his cycle helmet. Simple enough. Especially now we have intelligent LED strips available, which only need power plus one data wire to drive a load of LEDs.
Note: if you want to do this sort of thing, make sure you understand the dangers of having LiPo batteries close to your head, and take suitable precautions – see the note at the bottom of this post.
But first things first: I knocked out some very quick and dirty code to do a nice scanning effect (thanks Adafruit for the WS2812 library! Doesn’t half save some time). My code is here. Nothing fancy, but it has a nice decay effect, and a gamma lookup table to keep things pretty. I used an Arduino Mega board to do the testing:
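The guts of the effect can be sketched like this – a Python model of the logic rather than my actual Arduino sketch, with the LED count and decay rate picked for illustration:

```python
NUM_LEDS = 16  # illustrative; use however many LEDs your strip has

def make_gamma_table(gamma=2.2):
    """Map linear brightness 0-255 to a perceptually even output value."""
    return [round(255 * (i / 255) ** gamma) for i in range(256)]

def scan_frames(num_frames, decay=0.75):
    """Return per-LED linear brightness for each frame of the scanner."""
    level = [0] * NUM_LEDS
    pos, direction = 0, 1
    frames = []
    for _ in range(num_frames):
        level = [int(v * decay) for v in level]  # fade everything a little
        level[pos] = 255                         # light the "eye" fully
        frames.append(level[:])
        pos += direction
        if pos in (0, NUM_LEDS - 1):             # bounce at the ends of the strip
            direction = -direction
    return frames

gamma = make_gamma_table()
frames = scan_frames(40)
# On real hardware you'd push gamma[level[i]] out to the WS2812 strip each frame.
```

The decayed levels trailing behind the bright "eye" are what give the Cylon-style fading tail.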
Man, I hate drilling holes in PCBs. I make my boards with a mill, so it shouldn’t be too hard to swap the V-cutting bit for a drill bit, but I Just. Can’t. Be. Arsed.
And besides, I like making things as small and slimline and dinky as possible.
The little PCB above is a backpack for an LCD. Couldn’t avoid having to drill holes along the top to connect to the LCD itself, but everything else can be surface mounted.
But sometimes you need a way to connect, say, an nRF transceiver, or one of those newfangled ESP8266 WiFi modules, to your PCB, and they come already fitted with a pin header. And that would mean drilling more holes, and having the module standing off the board slightly. Yuck.
So here’s my hacky approach. First, design your PCB so the pads are on the top of your board, rather than the bottom: we’re going to take advantage of the fact the little transceivers are always double-sided boards, with through-hole plated holes.
Transceiver on the left, my board with its 8 pads on the right:
First step, get that header off the transceiver. With a blade, you can carefully lever off the plastic spacer tying the header pins together – do it gently, a little at a time, working from both ends of the header.
Then you can remove the pins one at a time nice and easily. Last one:
Next step – with the board held vertically, soldering iron on one side and solder sucker on the other, you can clean out all the solder from the holes:
Important – when you’ve removed the pins, you need to make sure the now-empty pads on the bottom of the board are tinned with solder; as the header was originally soldered on the other side of the board, they may not be. So give ’em a blob of solder.
Then remove as much solder as you can from the holes. You need it to be as tidy as possible, every through-hole tinned, but as clean as possible:
Next step: on your PCB, tin all the pads with the thinnest layer of solder you can. Then put a slightly larger blob of solder on diagonally opposite pads. Try and make them nicely rounded blobs, as they’ll act like locating pins:
Which means when you press the transceiver board down in position over the pads, it’ll locate itself perfectly squarely, with all the pads (hopefully) lining up perfectly between the two boards:
Touch your soldering iron to the two corner pads to melt the blobs and fix your boards together:
You should be able to look through the remaining holes to see whether they’re lining up perfectly with the pads underneath, then just fill the remaining holes with solder so they connect all the way through.
It’s a balancing act – you want enough solder to fill the hole and join up with the tinned pad on the board underneath, but you don’t want so much that it creates shorts between the two boards.
Result: two boards stuck together, without the extra height the header would create:
Check the connections with a multimeter before powering it up, just to make sure there are no shorts. If there are (which has only happened once out of the few dozen times I’ve done this), you can separate the boards by melting the solder, pad by pad, working a bit of folded-over kapton tape between them to keep them from reconnecting as they cool. Then clean up the two boards and have another go.
It may seem like a complicated way to avoid drilling eight little holes, but once you’ve done it a few times it’s surprisingly quick and easy and it knocks quite a few millimetres off the height of the board.
Finished PCB in position (note to future self: gotta find a way of moving those electrolytic caps off to the side of the board next time):
Final result: a wireless LCD readout for my solar panels:
Note: this is a hacky and, arguably, utterly unnecessary technique, but I like it, so there 🙂
A friend of mine has had his recording studio fitted out with custom built furniture and wanted a smart control panel to give quick access to some bits and pieces.
I laser cut a piece of smoked acrylic and mounted some sexy (and surprisingly cheap) illuminated switches from eBay. The decals on the switches are laser printed onto OHP film, cut out, and fitted inside the keycaps:
I soldered up the connections on the back of them… lots of wires. Each switch is illuminated, so that’s 4 connections per switch, 32 in total. Even though this panel will be mounted in a box, the last thing you want is wires hanging loose. I’ve seen some neat ways of “cable lacing” to tie them all together into a neat loom. Wikipedia has some nice pictures you can use as a guide; usually waxed cotton is used (I’m guessing the wax stops the knots from falling apart), but it turns out dental floss works just as well:
At the bottom of the panel, I’ve put the studio logo, all nicely backlit. The actual backlight is an old iPod display – take it apart carefully and you can remove the LCD glass, leaving you with a nice, evenly lit rectangle.
To create the logo, I laser printed several copies of the studio logo onto acetate, then carefully lined them up on top of each other and stuck them together with a little spray mount. Why multiple copies? If you try doing this with a single laser-printed acetate, you’ll find the black parts of the print aren’t dense enough to block the light properly. Layering up a few copies builds up the contrast, so you don’t end up with a glowing rectangle with a faint logo on it. Then I stuck it to the front of the iPod backlight, and glued it to the back of the acrylic panel:
The buttons are connected to a Teensy LC running in keyboard emulation mode, so they just trigger keypresses on the Mac the panel’s connected to.
Looks pretty smart in situ (that’s it just over the keyboard):
Warning: it’s awful. It has worked, and may still work, but it’s offered more for entertainment than anything else. There are several thousand hard-coded gotchas, and a pervading bad code smell in just about every file. But hey – it’s the biggest project I’ve tried putting together, and my first in Swift. And it’s not like there’s a standalone app you can run and play with; you need the hardware (which has similarly unpleasant firmware) in order to stand a chance of getting it working.
But I’ll do what I can to clean it up, document it etc. in the coming weeks.
Most swear boxes work the wrong way round: you say something naughty, then you have to put money in the box as penance. This is the opposite – press the button and this device generates a random swear word for you to use in conversation at your leisure. Technology, eh.
It’s not a new idea: a few years ago I came across this beautiful “Four Letter Word” clock made with delicious old-fashioned Nixie tubes:
Designed and built by Jeff Thomas, Peter Hand, and Juergen Grau. More information here.
I wanted to make something quick and simple as a birthday present for a friend, and I had four little LED starburst displays sitting in a box, so some sort of random word dispenser seemed like a good idea. Designed a quick circuit in Eagle:
The schematic in Eagle is pretty messy and tangled. I often find that if I’ve got loads of connections to make to a microcontroller, it’s not until I’m laying out the board design that I can see the way things should be connected.
This is the perfect example: I’m connecting two identical starburst displays to the microcontroller in a multiplexed fashion. In an ideal world you’d connect the anodes of the displays together, segment A to segment A, seg B to seg B and so on, leaving just the cathodes of each digit to be connected separately. This makes the software a little simpler – pin D3 (say) on the microcontroller controls the same segment on all the digits:
On a single sided PCB, though, routing the connections like that gets really tricky – you end up with loads of connections that have to jump over others so they end up on the right pins.
The alternative is to design the circuit so it’s easy to route (even if that means segment A on one display is connected to segment F on the other etc) and then sort it all out in the software.
Hence the relatively neat looking PCB design:
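On the software side, that fix-up boils down to a lookup table per display. Here’s a hedged sketch – the scrambled mapping below is invented for illustration, not the board’s actual routing, and I’ve used 7 segments rather than the starburst’s full set for brevity:

```python
SEGS = "ABCDEFG"  # real starburst displays have more segments; 7 here for brevity

# REMAP[display][logical_segment] -> physical pin bit for that display.
# Display 0 happens to be wired straight; display 1's routing scrambled things.
REMAP = [
    {s: i for i, s in enumerate(SEGS)},
    {s: i for i, s in enumerate("FDACGEB")},  # invented scrambled order
]

def segment_bits(display, segments):
    """Turn a set of logical segments into the port bits to actually write."""
    bits = 0
    for s in segments:
        bits |= 1 << REMAP[display][s]
    return bits
```

The firmware only ever thinks in logical segments; the table quietly absorbs whatever the PCB routing did.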
Note that there’s an error in the design / layout – I didn’t realise until too late that two of the microcontroller pins I wanted to use (A6 and A7) wouldn’t work as digital pins, so I had to use some wires to connect them to some free pins on the other side of the controller. Live and learn.
With the PCB milled out and populated, I hacked a rectangular hole out of a random wooden box I had knocking about, stuck a switch and a button on it, and squished in a bit of foam and a CR123A battery holder.
CR123A batteries are great for this sort of project – they only run at 3 volts, but that’s enough to drive LED displays without needing to add current limiting resistors. Helps that the display is multiplexed, too; though it looks like all the displays are lit up simultaneously, they’re actually taking turns, one digit at a time. Stops the LEDs from burning out – at most they’re on for 25% of the time.
The software’s pretty simple – there’s a list of about 45 swear words, and one gets picked at random. Proper randomness isn’t easy for a microcontroller (or a computer) to do on its own, so I measure how long the user has pressed the button, in microseconds, and use that value to help choose a word.
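That trick can be sketched like this – the word list here is a polite stand-in for the real one, and the seeding is a simplification of what the firmware does:

```python
import random

# Stand-in list; the real one has ~45 rather less printable entries.
WORDS = ["DANG", "HECK", "DRAT", "BLAST"]

def pick_word(press_micros):
    """Use the button-press duration (in microseconds) as entropy.

    Human timing jitters by thousands of microseconds press to press,
    which is plenty to make the choice feel random.
    """
    rng = random.Random(press_micros)
    return rng.choice(WORDS)
```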
Press the button and an animation sweeps across the display, with each letter of the word coming up one at a time like a fruit machine.
I’ve always wanted to play with a motion control rig. Ever since seeing a behind-the-scenes documentary about Star Wars, showing how they filmed the Death Star trench scenes: a huge model, with a robotic camera that could do all the flying shots over and over again perfectly so they could film all the individual elements and have them match up.
Robotic stuff was in its infancy back then; their robotic camera wasn’t controlled by computers, just lots of TTL logic chips wire-wrapped together with loads of knobs and switches to set speeds and design trajectories.
Motion control has come a long way since John Dykstra and the team built those first systems. Nowadays there are more competing systems than you can shake a stick at. But they’re all expensive. Way out of my range. The only way I could afford one is if I wanted to turn it into a business, do mo-co day in and day out, but I need more variety than that.
So I’ve built one. It’s mostly made of junk, and it’s got its limitations and quirks, but it’s mine. And it sort of works. Muhahaha.
This is the story.
First step in any robot is getting motors to do what you want. So I pulled apart an old ink-jet printer and a disco light and set about connecting it up to a computer. I made a silly video:
All seems a bit pointless, but the aim was to see if I could learn enough electronics and coding to get a computer to “play” a motion sequence back on a set of motorised things. And it worked.
At the end of the film, you can see the camera mounted in the yoke of an old disco light, panning and tilting; but what you don’t see is the very first snag I hit. If I tried filming something with the camera while it was being moved around, vibrations and wobbles from the disco light motors made the footage unusable. Disco lights don’t need particularly smooth motion; the motors and gears had been designed more for high speed moves.
So it was back to the drawing board. I needed to build my own camera mount, and motorise it. One key find was a big moving-head stage light, which had some huge pulleys in it – and when paired up with a motor with a tiny pulley it meant I could do much smoother (albeit slower) moves.
I went through several iterations:
By now I had a better idea of what sort of functionality I wanted. On the mechanical side, I wanted my rig to have:
– a pan and tilt head
– some kind of slider so the camera could actually move / translate in space, rather than being stuck in one place
– focus control
– a separate turntable accessory so I could rotate objects in front of the camera
On the computer side, I wanted software that could handle:
– an arbitrary number of axes of motion so I could add new stuff in the future
– manual control, so you could drag sliders around on-screen to move the robot
– kinematic limiters (so if you dragged a slider faster than the actual robot could move, it wouldn’t burn itself up trying to match your speed)
– easy trajectory design (ideally using Blender, my 3D software of choice)
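That kinematic-limiter idea can be sketched as a simple slew-rate clamp – the numbers below are illustrative, not my rig’s actual limits:

```python
def limited_step(current, target, max_step):
    """Move current towards target, but never by more than max_step per tick."""
    delta = target - current
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return current + delta

# Dragging an on-screen slider instantly to 100 while the axis
# can only manage 5 units per tick:
pos = 0.0
history = []
for _ in range(10):
    pos = limited_step(pos, 100.0, 5.0)
    history.append(pos)
# The axis ramps along at its limit instead of trying (and failing) to jump.
```

Run per axis, per tick, this means a wild flick of the mouse turns into the fastest move the hardware can actually survive.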
I learnt how to write Mac apps in Swift – look up the courses on iTunes U, they’re great – and managed to cobble together something that worked:
I always knew I was going to need focus control of some sort. Not much point being able to have my camera move around if it couldn’t keep its subject in focus. Traditionally this is done externally to the lens: professional cine lenses all have gear teeth running round the focus ring so you can drive the focus remotely, either mechanically with a flexible shaft with a knob on the end (a follow focus) or with a motorised gear controlled down a wire.
But I’ve already got good DSLR lenses and I want to use them. And they’re autofocus lenses so they’re already motorised – all the hard mechanical work has already been done. But how to control them? I could always open them up and hook a motor driver up directly to the tiny motor inside them. My better lenses use ultrasonic motors, though, which may need rather exotic drivers to make them run. And I don’t know what sort of feedback mechanism the lenses use to track how far the motor has run or what distance the lens is focussed on.
A nicer option is to just pretend to be the camera. Rather than hacking the lenses up and removing their microcontrollers, just communicate with them the same way a camera would.
I connected it up to an Arduino, and started poking commands at it to see what happened.
Success! Just getting the lens to move at all without the camera feels like a triumph, but there’s a long way to go.
The nice thing about controlling lenses like this is that you can do all your testing without opening a lens up at all. All communications happen via a set of contacts on the back of the lens, and you can buy macro extenders that fit between a camera and a lens, and have contacts on each side to keep the two connected. I bought a cheap one off eBay, and stripped it down, removing the camera-side contacts. The contacts on the lens-side are spring-loaded, so I soldered a wire to each spring before re-assembling it:
Now it’ll fit onto the back of any Canon lens and let you command it:
The downside is that when you want to actually film something, you now have a macro extender between the lens and the camera, which changes the focus range of the lens significantly. Great for ultra-closeups (well, it is a macro extender) but no good for anything more than a foot or two away. And some lenses are unusable with an extender on; their focus range ends up being pretty much inside the lens itself.
Autofocus lenses and focus distance
One of the big issues with trying to control a lens like this is that it’s not what the lens was designed for. I want to be able to say “set your focus to exactly 300mm… now pull back to 297mm…” and so on. But these lenses are designed for autofocus, which means neither they nor the camera ever need to know the distance they’re focussing on, just that the subject is in focus. The lens can step the focus forwards or backwards by tiny discrete amounts – steps – but doesn’t need to know what effect it’s had on the focus distance. And the camera is only interested in the sharpness of the subject (which it can measure by looking at the highest frequencies in the data coming in).
So it’s normally more a hunt than anything else. The camera looks at the image coming in, decides which way out of focus it is and roughly how much, then tells the lens “focus 20 steps towards infinity”. Then it looks at the image now coming in, and corrects and fine tunes the focus: “3 steps inwards”, “ok, just 1 step back out again”. Smart look-up tables for each lens give the camera help in picking the right number of steps to use for each guess, but it’s an iterative process. Distance measurement doesn’t come into it.
(That’s why some pro camera flashes have an AF illuminator – usually red – that projects a striped pattern onto the subject for a moment before the camera takes the picture. It gives the camera some detail to look at and help it assess focus. Point an autofocus camera at something with not much visible detail (say, a sheet) and it’ll struggle to focus. Note that some cameras, particularly camcorders, may have an infra-red distance detection device to make a quick estimate of the distance first, to speed up the first coarse focus move, but the fine tuning is always done “visually”.)
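The hunt can be modelled as a little simulation. This is a toy, not the camera’s actual algorithm – the sharpness function is invented, and real cameras use per-lens lookup tables to pick smarter step sizes – but the shape of the search is the same:

```python
def sharpness(focus, subject):
    """Toy sharpness metric: peaks when focus lands on the subject.
    The camera only ever sees this number, never the distance itself."""
    return 1.0 / (1.0 + abs(focus - subject))

def hunt(focus, subject, step=20):
    """Step towards better sharpness; reverse and halve the step when it gets worse."""
    best = sharpness(focus, subject)
    direction = 1
    while step >= 1:
        trial = focus + direction * step
        s = sharpness(trial, subject)
        if s > best:
            focus, best = trial, s   # sharper: keep going this way
        else:
            direction = -direction   # worse: turn round...
            step //= 2               # ...and take smaller steps
    return focus
```

Each iteration is one “focus N steps that way” command followed by a look at the image – distance never comes into it.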
So: autofocus lenses shift focus in steps, but I need to know distances. To make things more awkward, every lens has different mechanics and motors, so the total number of steps, and how many steps it takes to shift the focus by an inch, vary widely from lens to lens. And the relationship between steps and actual focus distance is distinctly non-linear, too (the hint is that word “infinity”): 10 steps at the near end of a lens’s range may shift the focus by a few millimetres, while at the far end 10 steps may shift it by 100 metres.
All this means look-up tables are the way to go. I can tell the lens to pull the focus all the way in to its closest focus, then have it move out by 10 steps at a time, and measure and record the distance that’s in focus.
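Once a lens is profiled, going the other way – desired distance to motor steps – is just interpolation over the table. A hedged sketch, with a completely made-up profile (a real one gets measured lens by lens):

```python
# Invented focus profile for one hypothetical lens: cumulative motor steps
# from the nearest-focus end, against the measured in-focus distance in mm.
PROFILE = [(0, 350), (50, 450), (100, 620), (150, 950), (200, 1800), (250, 9000)]

def steps_for_distance(distance_mm):
    """Linearly interpolate the profile: desired focus distance -> lens steps."""
    if distance_mm <= PROFILE[0][1]:
        return PROFILE[0][0]          # clamp to nearest focus
    if distance_mm >= PROFILE[-1][1]:
        return PROFILE[-1][0]         # clamp towards infinity
    for (s0, d0), (s1, d1) in zip(PROFILE, PROFILE[1:]):
        if d0 <= distance_mm <= d1:
            t = (distance_mm - d0) / (d1 - d0)
            return round(s0 + t * (s1 - s0))
```

The uneven spacing of the distances against the evenly spaced steps is exactly the non-linearity described above.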
So I printed out a nice sharp focussing chart (just a nice sharp picture to focus on), set the camera up on a tripod, and hooked up an external monitor to it so I could check the focus without needing to squint through the viewfinder. Then, tape measure in hand, stepped through the first lens’s range of focus, moving the focussing chart back and forth in front of the camera until it was at its best focus, and noting the distance each time.
But it wasn’t working properly.
For this to work, I need to know that if I tell a lens “move focus 10 steps forward” then “move focus 10 steps back again” it’ll end up focussing on the same spot it started at. But the first lens I tried profiling didn’t do that. And while telling it to reset to its closest focus always brought it to the same spot, if I stepped it forward by 100 steps, it ended up at a different position than if I stepped it forward by 10 steps ten times in a row. Like the lens was losing steps. Not good.
The lenses do kind of keep track of their current position, but even that’s not without its issues. All autofocus lenses have a manual option; a focus ring you can grab with your paw and twist to focus. Some lenses have a mechanical switch that disengages the autofocus motor, but some have a slipping clutch that lets you manually focus at the same time as the motor (which probably annoys the camera). The upshot is that the lens can only tell you how many steps it has cumulatively tried to move the focus since it was powered on – it has no idea whether that’s where the focus actually is right now.
Bit of a show-stopper for me, though. The only way I can track where the lens is, is by dead-reckoning; moving the lens to a known point (the nearest focus point), resetting my counter, then just blindly sending commands and keeping track of how many steps back and forth I’ve asked it to do.
Thought I might as well try a better lens, just to see.
My EF-S 17-55mm IS USM lens worked perfectly. As long as I didn’t touch the manual focus ring, I could send it off to a hundred different points of focus, and back to a point near an end-stop (but not actually *on* the end-stop, as that’s the one place I know any lens will be consistent), and the lens came back to the right spot every time. Well, within a gnat’s whisker of it.
So: my posh expensive “ultrasonic motor” lens worked. We’re back on track. I tested my other lenses; for the most part, the more expensive, the better. There are different flavours of ultrasonic motor. The better ones are ring-type, where the motor is in the form of a ring that surrounds the focus elements inside the lens, and it more or less directly drives the elements with a minimum of extra gears and cogs. There are cheaper lenses around that get to say they’re ultrasonic although internally they’re very similar to a normal lens: they have a tiny motor (albeit ultrasonic) positioned to the side of the focus elements, with lots of gears and a rack and pinion affair to actually drive the elements.
I now have a lens I can control the focus and iris on, but if I want to get at the contacts on it while it’s mounted on a camera, I have to use the macro extender. Not ideal. I want to be able to use the lens’s normal focus range. So. (gulp) Time to open it up, remove the contacts so the camera can’t interfere with my plans, solder some wires to the right bits inside the lens, and drill a hole in the lens for the wires to come out. Drilling a hole in a £600 lens… still, can’t make an omelette etc etc
Rather than milling out a dedicated circuit, I hooked up an old Arduino to act as an interface between the computer and the lens. It uses the same protocol as my motor drivers: it receives a list of positions (in steps – I do the conversion from real-world focus distances to focus motor steps on the host computer) and a clock signal, and it just plays through the sequence, sending a new command to the lens every 25th of a second. The additional PCB attached to the Arduino is a DC converter to provide an additional 7.2V supply to the lens:
I set up a quick camera move in Blender, and had Blender calculate the focus distance for each frame along with the motion data. It’s supposed to be focussing on the lego man, but when I measured the scene up to recreate in Blender, I tape-measured to the top of the tripod he’s standing on instead. After shooting the motion, I dropped the video into After Effects and superimposed a star-field over the top to see how well the camera’s actual motion matched the trajectory I’d created. Hence the dots.
I haven’t gone into the actual EF protocol at all here, and it’s not without its quirks. I’ll write an article on it at some point. Google will get you started, though.
Also, I had to do some hacky coding to deal with the fact that EF lenses tend to ignore commands if they’re busy. An ignored command is a show-stopper: all moves from then on will be wrong. There are different kinds of “busy” as far as the lens is concerned, though, and it may report itself as not busy (i.e. ready for commands) when it’s in the middle of moving the lens, and then it’ll ignore the new lens move command. So if you’re following in my footsteps, you’ll need to send the bytes 0x90, 0xB9, 0x00 to the lens, and if the byte that comes back in response to that final 0x00 is either 4 or 36, the lens is moving. So wait a bit and try again, otherwise your command will be acknowledged but ignored.
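That busy-poll might look something like this – a Python sketch with the lens traffic mocked out, where `spi_transfer` stands in for clocking one byte out to the lens and returning the byte that comes back (on the Arduino, an `SPI.transfer` call):

```python
# Status bytes that mean "still moving" – from the observation above.
BUSY_STATUSES = (4, 36)

def lens_is_busy(spi_transfer):
    """Send 0x90, 0xB9, 0x00; the reply to the final 0x00 is the status byte."""
    spi_transfer(0x90)
    spi_transfer(0xB9)
    return spi_transfer(0x00) in BUSY_STATUSES

def send_when_idle(spi_transfer, command_bytes, max_polls=100):
    """Poll until the lens reports idle, then send the command.

    Sending while busy gets the command silently ignored, which throws
    off every move from then on – hence the paranoia.
    """
    for _ in range(max_polls):
        if not lens_is_busy(spi_transfer):
            for b in command_bytes:
                spi_transfer(b)
            return True
    return False  # gave up: the lens stayed busy
```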
In the bad old days, I chucked some solar panels on our extension roof, ran some cables to the attic, and had this sort of mess going on: Cheapo MPPT solar battery charger connected to lots of old car batteries.
Not pretty, but it let me wire the house for 12v, and fit lots of lighting and stuff that wasn’t reliant on the grid. Finally got round to tidying it up last year…
No CGI! All captured with my zany mo-co contraption on a Canon 7D, then comped together in After Effects.
Got the camera to do 4 passes:
1. Normal speed to capture the hand at the start
2. Frame by frame to do the stop-motion chips
3. Normal speed again to capture the hand at the end
4. Normal speed, all lights off apart from one, and a spray of stage smoke (you can see it at the top of the frame)