Apple just bought itself a 3D sensor company, a move that has some intriguing possibilities.

This past weekend, Apple confirmed its acquisition of PrimeSense, an Israel-based company best known for its work on the original Microsoft Kinect, a gaming accessory that lets you control on-screen action by moving your body.

"I think it's very big news," says David Fleet, a computer science professor at the University of Toronto.

Fleet studies machine vision systems and cites the success of the Kinect as a "big win," adding "I think many more applications are on the horizon."

PrimeSense's 3D sensing system uses an emitter to project a pattern of invisible infrared light. A camera then measures how that pattern is distorted to infer depth and motion.
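The underlying idea is triangulation: because the emitter and camera sit a known distance apart, the amount the projected pattern appears to shift tells you how far away a surface is. Here is a minimal, illustrative sketch of that relationship in Python. This is not PrimeSense's actual algorithm, and the focal length and baseline values below are rough assumptions, not real Kinect specifications.

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Estimate depth (in metres) from the observed shift ("disparity")
    of a projected pattern, via the standard triangulation relation
    z = f * b / d. focal_px and baseline_m are assumed example values."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A larger disparity means the surface is closer to the sensor:
near = depth_from_disparity(20.0)  # bigger shift, smaller depth
far = depth_from_disparity(10.0)   # smaller shift, larger depth
print(near < far)
```

Repeating this calculation for every point in the projected pattern yields a full depth map of the scene, which is what makes body tracking on the Kinect possible.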

Part of the potential here is to create the kind of touchless, gesture-based computer interface familiar to anyone who's seen Tom Cruise in Minority Report or Robert Downey Jr. in Iron Man.

PrimeSense has demonstrated its 3D sensing technology on mobile devices, suggesting that 3D sensors could be built into future versions of Apple's iPad or iPhone.

Fleet sees additional possibilities. "You can imagine that if every device could understand its environment, we would have smart appliances, smart vehicles, smart toys. [Computer vision] really is a fundamental source of information for just about anything you'd ever want to do."

Apple won't say exactly why it bought PrimeSense, but that hasn't stopped people from speculating. Rumours have long swirled around Apple's interest in the television market. Given PrimeSense's previous work with the Kinect (a living room device if there ever was one), it's not hard to see a fit there.

PrimeSense is a big name in the 3D sensor space, but it's not the only one. Other companies, such as Leap Motion and PMD, are developing similar technology. Leap Motion, notably, is working with computer manufacturers to build 3D sensors and touchless gesture recognition into laptops.

But here's the thing: though Iron Man and Minority Report make gesture-based computers look cool, wouldn't all that arm-waving result in tired limbs?

"Most studies to date have shown that gesture-based interfaces have not been very successful because people get tired quickly," says Fleet.

While many people speak with their hands, he says, "We have yet to build a computer vision system or a depth sensing system that can perceive the subtlety and nuance of human gestures. So instead, we end up programming these systems which require very large scale, sweeping gestures of arms and hands, and it's just not particularly natural."

Fleet believes there's room for improvement, and that the biggest gains will come from better algorithms and clever software.

What's next?

So what's next for 3D sensors? I think there are three things to watch for.

First, better range. Current 3D sensor technology works well for living room-sized environments. Extending the range and fidelity of these systems is an area of active research.

Second, miniaturization. Physical size and power consumption are major factors, especially if we're talking about adding depth-sensing technology to mobile devices.

Finally, variety. I suspect this tech will start to appear in an increasing number of devices. That might mean additional living room applications, or it might mean smartphones and tablets.

And it's interesting to think about potential wearable applications: how a smartwatch, or a head-mounted display like Google Glass, could make use of additional information about the world around us.

Apple reportedly paid about $350 million US for PrimeSense. Clearly, it sees potential.