Nick Loman was needling Josh Quick via Twitter about how Josh would set up a complex set of reactions: do it manually, or program the robot?
@pathogenomenick @OmicsOmicsBlog @Scalene it's all about modular code (he says with no practical robot coding skills..)— Ewan Birney (@ewanbirney) November 15, 2016
@ewanbirney @OmicsOmicsBlog @Scalene you would not believe how annoying robot programming is; it's like explaining to a dog how to make tea— Nick Loman (@pathogenomenick) November 15, 2016
@OmicsOmicsBlog @drckitty @pathogenomenick @ewanbirney @Scalene I think it's all just g-code under the hood— Justin Payne (@crashfrog) November 16, 2016
@crashfrog @OmicsOmicsBlog @drckitty @pathogenomenick @ewanbirney They all have some annoying GUI though— Josh Quick (@Scalene) November 16, 2016
Before I start, I really should cover my nearly non-existent bona fides. I haven't actually done any robot programming, but I did come close at the first starbase. We had purchased a Janus system from Perkin-Elmer (that's what they are now; it could have been Caliper then). I figured learning how to program it would be a useful skill, so I sat in on the training class. Then two things happened: I realized we were well covered with a senior research associate I had recruited in from the old Codon team, plus the two large bones in my left leg had a skiing-induced altercation that sent me to hospital and rehabilitation for a week. When I came back, my usual challenge with not bumping into things was now augmented with crutches, so I had no business hanging around an expensive robot in an increasingly cramped lab.

@pathogenomenick @ewanbirney @OmicsOmicsBlog @Scalene our robot rep does our programming, it's awesome— Carly Rosewarne (@MicrobialMe) November 16, 2016
However, in that time I made a few observations that jibed with what I had seen at Codon. Observations that I suspect are still reasonably valid (well, I fear they are completely valid).
First, programming these robots correctly is far more complicated than I ever imagined. They would be a natural target for a good visual programming language, but I don't believe the vendor supplied one. That's problem number two: the command languages all seem to be proprietary. Each robot is different, but that's no excuse.
When I first became interested in programming, both parents taught me that computers do precisely what you tell them to, nothing more, nothing less. Our robot, and I suspect the others, follows this maxim with painful excess. Not only do you need to program gross movements, but you need to calibrate them for the precise plasticware you are using; tiny differences between plates are huge differences for the robot. Mess this up, and your pipet tips will crash into the bottom of the plate or eject the liquid off-center in the well. As the tweet above attests, our local instrument rep was great at programming the robot, but that's not really a satisfactory process for a rapidly-changing research environment. If robots are going to contribute to a rapid idea-experiment cycle, easy (shall we say "fluid"?) programming is essential.
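To make the calibration point concrete, here is a minimal sketch of why a millimetre of plasticware variation matters. Every name and dimension below is invented for illustration; no vendor's actual command language looks like this.

```python
# Hypothetical sketch: why labware definitions matter to a liquid handler.
# All class names and dimensions are illustrative, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Labware:
    name: str
    well_depth_mm: float   # distance from well top to well bottom
    top_z_mm: float        # height of the well top above the deck

def aspirate_z(labware: Labware, clearance_mm: float = 1.0) -> float:
    """Z height (above deck) at which to aspirate: just above the well bottom."""
    return labware.top_z_mm - labware.well_depth_mm + clearance_mm

# Two "96-well plates" from different suppliers, a millimetre apart in depth:
plate_a = Labware("vendor_a_96", well_depth_mm=10.9, top_z_mm=14.2)
plate_b = Labware("vendor_b_96", well_depth_mm=11.9, top_z_mm=14.2)

# A program calibrated for plate A but run with plate B aspirates a full
# millimetre off target -- or, in the reverse case, rams the well bottom.
print(f"plate A aspirate height: {aspirate_z(plate_a):.1f} mm")
print(f"plate B aspirate height: {aspirate_z(plate_b):.1f} mm")
```

The robot has no notion that both items are "a 96-well plate"; it only has these numbers, so every plate swap means a recalibration.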
Worse, these expensive robots have absolutely no situational awareness, as they have nearly no senses with which to check their instructions. So if you load the wrong plasticware, or put the wrong fixture at the wrong location, or forget to actually put plates on the deck, the robot will merrily follow its instructions without any complaint (well, unless you cause a physical crash). About the only sensor I can remember on our Janus is a barcode reader, which was an extra. Painfully, that reader is on the deck, so extra steps are needed to use it. A particularly unfortunate mistake would be to put the tip and plate dump chute in the wrong location, so that discards pile up on the deck. Far messier is to have it in the correct location, but without a correctly positioned and empty trash bin under its aim.
This is utterly bizarre in a world with self-driving cars. An obvious fix would be to put a small camera on each moving head, to actually scan the deck for correspondence with what the program is expecting. Something as simple as QR-coding each fixture would enable the robot to match the physical fixture layout with the virtual one. Better yet, shouldn't the robot be able to determine whether plates have actually been put in position? Precisely identifying plate types with just visuals (or perhaps augmented with ultrasound) is probably too big a reach, but that doesn't mean the system couldn't catch simple errors, such as putting down a 96-well plate where a 384-well plate is required. Of course, if you barcode the plates early with your own barcodes and have a good LIMS (allegedly these exist somewhere), the LIMS and robot could collaborate to enforce correct labware.
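The deck check proposed above is really just a dictionary comparison once the scanning hardware exists. A sketch, with the camera/QR reading imagined away and every position and labware name invented for the example:

```python
# Hypothetical pre-run deck verification: a head-mounted camera reads a QR
# code at each deck position, and the run refuses to start unless the
# physical layout matches the program's virtual layout. The scanning step
# is assumed; the comparison logic is the point.

expected_layout = {
    "A1": "tip_rack_300ul",
    "B1": "plate_96_source",
    "B2": "plate_96_dest",
    "C1": "trash_chute",
}

def verify_deck(scanned: dict, expected: dict) -> list:
    """Return human-readable mismatches; an empty list means safe to run."""
    problems = []
    for pos, want in expected.items():
        got = scanned.get(pos)
        if got is None:
            problems.append(f"{pos}: expected {want}, found nothing")
        elif got != want:
            problems.append(f"{pos}: expected {want}, found {got}")
    return problems

# Operator forgot the destination plate and put it where the chute belongs:
scanned = {"A1": "tip_rack_300ul", "B1": "plate_96_source",
           "C1": "plate_96_dest"}
for problem in verify_deck(scanned, expected_layout):
    print("DECK ERROR:", problem)
```

Catching "found nothing" before the first move is exactly the sort of situational awareness these instruments lack today.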
Why can't Janus join sisters Alexa, Cortana and Siri and have some smarts? "Janus, I want to replicate four plates with 10 microliters diluent added -- how do I set this up?" Programming the robot is a major barrier to using the robot; making it easier to run common situations should be a goal.
There's also a question of efficiency; many robot programs can accomplish the same task, but not all are equally quick or stingy with tips. We had a consultant, "The Robot Whisperer", who could significantly tighten up a program. Even our manufacturer's rep, who's a wizard with robot coding, is in awe of her skills. But again, scheduling a consultant can be a frustrating delay. With better, higher-level languages shared across hardware, the equivalent of code optimizers could be built. These would probably be interactive to a degree, as sometimes an optimization will risk something important (such as contamination) or simply involve trading off time versus consumables.
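The time-versus-consumables trade-off is easy to illustrate with back-of-the-envelope arithmetic. The two strategies and all the numbers below are invented for the example, and the cheaper strategy carries exactly the kind of risk (reagent carryover from a reused tip) an optimizer would have to flag rather than silently apply.

```python
# Illustrative tip-versus-time arithmetic for filling wells with one reagent.
# Strategy names and move counts are invented for this sketch.

def tips_and_moves(n_dest_wells: int, strategy: str):
    """Return (tips used, pipetting moves) for a given dispensing strategy."""
    if strategy == "one_tip_per_well":
        # Safest against carryover: fresh tip, aspirate + dispense per well.
        return n_dest_wells, 2 * n_dest_wells
    if strategy == "multi_dispense":
        # One tip reused; each aspirate feeds up to 8 dispenses.
        # Far fewer tips and moves, but risks carryover between wells.
        aspirates = -(-n_dest_wells // 8)   # ceiling division
        return 1, aspirates + n_dest_wells
    raise ValueError(f"unknown strategy: {strategy}")

for strategy in ("one_tip_per_well", "multi_dispense"):
    tips, moves = tips_and_moves(96, strategy)
    print(f"{strategy}: {tips} tips, {moves} moves")
```

For a single 96-well plate the gap is already stark, which is why a human optimizer (or an interactive software one) earns their keep.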
I'd love to be wrong on these points and have someone point out in the comments that lab robots incorporating these very features already exist. I don't have much hope of being wrong, but for once that is what I'd prefer. Liquid handlers are powerful instruments, but that power is diluted by the arcane nature of programming them.