Nearly everything we do in the real world involves two hands, so in this video we're going to look a little more deeply at the benefits of bimanual input in the real world, and at what we can learn from it for digital interfaces. One extreme example of bimanual interaction is rock climbing, like you see here. But it's not just rock climbing; we see the same thing driving a car. When you're driving, especially with a stick shift, your four limbs are each doing different things: one hand is on the steering wheel, the other hand is on the shifter, one foot controls the gas and the brake for velocity, and the other foot controls the clutch. Or take playing music, a wonderful example of the virtuosity that's possible with manual interaction. I think musical instruments offer an aspirational example of what might be possible in the future of digital interfaces.

Some of the most interesting bimanual techniques in the physical world are ones where the two hands are coordinated. In these coordinated activities, the dominant hand does the fine motor motion and the non-dominant hand performs the supporting gross motor interactions. In fact, you saw that when I pulled off this pen cap here, that itself was a bimanual interaction. And often, as with the pen cap, you're not even aware that two hands are playing coordinated roles. Drawing, for example, can seem like a one-handed task: I'm just sketching, right? But watch people draw naturally. If I'm going to draw this coffee cup here, I'll use my non-dominant hand to set the frame of the page, and that makes it a lot easier to draw the things I'd like to draw. If, by contrast, I tape the page down so that I can't set the frame, and I have to work with just one hand, [SOUND] then drawing that same coffee cup gets a lot slower and harder. Now, I'm not a great drawer, so you may not see the difference so much with me, but in my opinion, that is a less appetizing cup of coffee.

You can see the same thing if we want to move a bunch of objects. Here I've got several cards, and with two hands I can quickly arrange them to say hello to everyone online. Two hands were heavily involved in that task. If I want to do that same thing on this computer interface here, where I've got the same cards spelling hello to everyone online, it's a much less felicitous task.

This idea that the non-dominant hand sets the frame and the dominant hand does the fine motor work was articulated by Yves Guiard, who described it as a kinematic chain, in particular because our non-dominant hand usually leads the interaction and the dominant hand follows. Another example is eating: my non-dominant hand uses the fork, which sets the frame for the eating action, and then the knife in my dominant hand does the fine motor activity. Unless you're a drummer, if you flip these two so that the fine motor activity falls to your non-dominant hand, it gets a lot more difficult.

But let's look at the computer here. For typing, I've got two hands, and it's a reasonable interface. Boom, boom, boom. Especially for a touch typist, it's pretty quick. There are a lot of cues involved so that both hands can work without constant input from the eyes: the keys have edges to them, and there are homing bumps on these two keys right here. So I can type pretty quickly with my two hands. But for things other than typing, our computers today mostly use just one or maybe two fingers.
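Before we go on, here's one way to make Guiard's model concrete in code: a minimal sketch, assuming a simple pan-and-zoom canvas, where the non-dominant hand coarsely positions the reference frame and the dominant hand's fine strokes are interpreted within it. It illustrates the principle rather than any particular system.

```typescript
// A sketch of Guiard's kinematic chain on a digital canvas: the non-dominant
// hand makes coarse, infrequent updates to a reference frame (the "page"),
// and the dominant hand's fine strokes are interpreted within that frame.

interface Point { x: number; y: number }

// State set by the non-dominant hand: where the page sits and how zoomed it is.
interface Frame {
  offset: Point
  scale: number
}

// Non-dominant hand leads with gross motor adjustments to the frame.
function panFrame(frame: Frame, delta: Point): Frame {
  return {
    ...frame,
    offset: { x: frame.offset.x + delta.x, y: frame.offset.y + delta.y },
  }
}

function zoomFrame(frame: Frame, factor: number): Frame {
  return { ...frame, scale: frame.scale * factor }
}

// Dominant hand follows with fine motor strokes, expressed relative to the
// frame the non-dominant hand has set.
function screenToPage(frame: Frame, screen: Point): Point {
  return {
    x: (screen.x - frame.offset.x) / frame.scale,
    y: (screen.y - frame.offset.y) / frame.scale,
  }
}

// Example: zoom in on the part of the page being drawn, so the same physical
// pen movement lands with four times the precision in page coordinates.
let frame: Frame = { offset: { x: 0, y: 0 }, scale: 1 }
frame = zoomFrame(panFrame(frame, { x: -120, y: -80 }), 4)
const stroke: Point[] = [
  { x: 200, y: 150 },
  { x: 204, y: 153 },
].map(p => screenToPage(frame, p))
console.log(stroke) // [{ x: 80, y: 57.5 }, { x: 81, y: 58.25 }]
```

The asymmetry is the point: frame updates are coarse and occasional, while stroke points are fine-grained and frequent, matching Guiard's observation that the non-dominant hand leads and the dominant hand works at a finer spatial and temporal grain.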
Now, there are a couple of domain exceptions to that one-or-two-finger pattern. If I have something like Photoshop, I can use the non-dominant hand to set the tool and the dominant hand to do the fine motor work. Another example of a domain-specific interface is this video jog dial here, where the controller can be used by the non-dominant hand, again to set the tool or the frame, while the dominant hand does the fine motor work. Final Scratch, as you see in this picture, is a DJ interface with physical records that carry a digital code; it's also a wonderful example of bimanual interaction.

But our poor computer, what does it think about us? If I have a normal menu-driven interaction, with a button press or a few presses at a time, then as far as my computer knows, I have just one finger. You can see the mental model of what my computer thinks of me right here, wonderfully rendered by Dan O'Sullivan. That's especially a shame because this idea of bimanual interaction has been around for a long time. Doug Engelbart was doing this in the 1960s, and so were the early computer music pioneers, because they were drawing from physical instruments that were bimanual. So here's an example of Doug Engelbart's chorded keyboard, which he showed in San Francisco, California in his 1968 Mother of All Demos.

>> This device over here is unique to us, and we always have to justify and explain it. We'll do it in reverse order. We'll explain it first. [LAUGH] It provides for you the one-hand equivalent of what you can do with a keyboard. There are five keys, and normally each finger sits on a key. Depressing any one key at a time produces a character, and any two keys at a time also, and in fact any combination of depressing, of which there are 31 combinations.

>> Even now, many of Doug's ideas have become part of the mainstream, but bimanual interaction hasn't yet. Part of the reason is that while pixels can be repurposed and innovated on by any developer, new hardware costs money and is generally part of the platform. Or, if it comes with its own software, you've got to buy the software and the hardware together. So there are logistical impediments to hardware innovation in a way that there are fewer for software innovation. What's exciting for me is that right now, with the introduction of wearable computers and the increasing use of embedded sensors, these new form factors of computing are bringing more innovation in interface hardware to the marketplace than maybe at any time prior. Interface hardware is an especially good example of what Bill Buxton calls the long nose of innovation: the research literature can contain seeds of ideas that take years or even decades to blossom. So if you look at these amazing new computing markets, like smartphones, tablets, wearables, and the internet of things, and you're wondering where you can make your mark and have a big innovation, one great opportunity is to think about how you can leverage our richer bodily interactions, like we see with drawing, with music, and with other creative endeavors.

Okay, so what's an example of something that would be really innovative and possible in the near future? One piece of work I like a lot is by Ken Hinckley and his colleagues at Microsoft Research. What he's looked at is how pen (stylus) interaction plus touch interaction together can create a new vocabulary for user interfaces. You can see an example in this short video here.
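Before the demo, one quick aside on Engelbart's keyset from a moment ago. Five keys, each either up or down, give 2^5 = 32 states, and the 31 usable chords are the ones with at least one key depressed. Here's a minimal decoding sketch; the chord-to-character table is a hypothetical assignment, since the lecture doesn't specify Engelbart's actual mapping.

```typescript
// Engelbart-style chord decoding: treat the five keys as five bits, so a
// chord is a number from 1 to 31 (the 2^5 - 1 non-empty combinations). The
// chord-to-character table below is a hypothetical assignment for illustration.

type Chord = [boolean, boolean, boolean, boolean, boolean] // one entry per finger

// Pack the five keys into a 5-bit code: first key = bit 0, ..., fifth key = bit 4.
function chordToCode(chord: Chord): number {
  return chord.reduce((code, pressed, i) => code | (pressed ? 1 << i : 0), 0)
}

function decode(chord: Chord): string | undefined {
  const code = chordToCode(chord)
  if (code === 0) return undefined                       // no keys depressed
  if (code <= 26) return String.fromCharCode(96 + code)  // 1 -> 'a' ... 26 -> 'z'
  return [' ', '.', ',', '\n', '\b'][code - 27]          // codes 27..31
}

console.log(decode([true, false, false, false, false])) // "a" (one key)
console.log(decode([true, true, false, false, false]))  // "c" (two keys together)
```

Okay, with that bit of arithmetic in mind, here's the pen-and-touch demo.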
>> We see an inevitable evolution of graphical user interfaces into new form factors with simultaneous pen and touch input. To explore the affordances of pen plus touch input on direct displays, we have implemented a prototype digital drafting table for the Microsoft Surface. We can robustly sense a custom IR pen as distinct from other multi-touch contacts. By default, the pen writes; I can write while my hand rests on the display. But touch manipulates: for example, I can zoom the page with my non-preferred hand, I can flip between pages, I can move objects around, or I can quickly grab a Post-it but still write on it at a moment's notice. This has a surprising obviousness to it, but don't miss how pen and touch complement one another. There's a tremendous fluidity with which I can interleave annotation and other secondary tasks.

>> And even further afield, here's one idea of something that might become part of our everyday interactions but could take a decade or even more to arrive. The video you see here shows research by CMU professor Chris Harrison and colleagues on a system called Skinput, which projects directly onto the body and opens up more opportunities for thinking about, especially in a mobile setting, what kinds of interactions can happen right on our bodily surfaces.

>> In this video, we present Skinput, a bioacoustic sensing technique that allows the body to be appropriated as an input surface. When a finger taps the skin, the impact creates an ensemble of useful acoustic signals. When slowed down 14 times, we can see transverse waves on the skin's surface. However, complex longitudinal waveforms also propagate through the body. To capture these signals, we developed a special-purpose bioacoustic sensing array. Variations in bone density, size, and mass, as well as filtering effects from soft tissues and joints, mean different locations are acoustically distinct. Software we developed listens for impacts and classifies them, and different interactive capabilities can be bound to different locations. Here we see a user playing a game of Tetris using their fingers as a control pad.

>> And if you're interested in learning more, here's a couple of articles to check out. Happy designing.
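One last sketch for those who want to tinker. Hinckley's default division of labor, the pen writes while touch manipulates, maps neatly onto the web's standard PointerEvent API, which reports whether each contact came from a pen or a finger, much as the custom IR pen in the video let the Surface tell them apart. The stroke and manipulation helpers below are hypothetical stubs standing in for an app's ink and pan/zoom logic.

```typescript
// Route input by pointer type: pen (dominant hand) writes, touch
// (non-dominant hand) manipulates the page. The declared helpers are
// hypothetical stubs; a real app would supply its own ink and pan/zoom code.

declare function beginStroke(x: number, y: number): void
declare function extendStroke(x: number, y: number): void
declare function endStroke(): void
declare function beginManipulation(id: number, x: number, y: number): void
declare function updateManipulation(id: number, x: number, y: number): void
declare function endManipulation(id: number): void

const canvas = document.querySelector('canvas')!

canvas.addEventListener('pointerdown', e => {
  if (e.pointerType === 'pen') beginStroke(e.offsetX, e.offsetY)                           // fine motor work
  else if (e.pointerType === 'touch') beginManipulation(e.pointerId, e.offsetX, e.offsetY) // set the frame
})

canvas.addEventListener('pointermove', e => {
  if (e.pointerType === 'pen') extendStroke(e.offsetX, e.offsetY)
  else if (e.pointerType === 'touch') updateManipulation(e.pointerId, e.offsetX, e.offsetY)
})

canvas.addEventListener('pointerup', e => {
  if (e.pointerType === 'pen') endStroke()
  else if (e.pointerType === 'touch') endManipulation(e.pointerId)
})
```

Note that writing while your palm rests on the screen, as in the video, takes more than this routing rule; shipping systems layer palm rejection on top of it.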