Team [I/O]3


[I/O]3 (Input/Output Cubed) is the next step in 3D modeling technology. Analogous to a drawing tablet, it aims to allow intuitive manipulation of virtual 3D objects on the computer, as well as display a volumetric image of those objects. User hand gestures control virtual object functions, such as creation, translation, rotation, and scaling, as well as control functions, such as saving a copy of the virtual model or undoing and redoing changes. The model is then shown on a volumetric display, allowing the user to visually examine it as if it were CNC machined or 3D printed, before consuming any physical material. The volumetric display is a true 3D representation of the virtual model that utilizes persistence of vision, free of the characteristic distortions and limitations of common stereoscopic 3D displays. Combined with simple 3D modeling software, the hand-gesture tracking and true-3D volumetric display of [I/O]3 give users a complete toolset to create, modify, and save virtual 3D models intuitively, all while viewing their creation as if it were a tangible object, without the time and material cost of physically building it.
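The gesture-to-function mapping described above can be sketched in Java (the language of the project's JOGL-based modeling software). The gesture names and the Model fields below are illustrative assumptions for the sketch, not the project's actual interfaces:

```java
public class GestureDispatch {
    /** Minimal stand-in for the 3D model state (illustrative only). */
    static class Model {
        double x = 0.0;      // position along one axis
        double scale = 1.0;  // uniform scale factor
    }

    /** Apply one recognized gesture name to the model; unknown gestures are ignored. */
    static void apply(Model m, String gesture) {
        switch (gesture) {
            case "swipe":     m.x += 1.0;     break; // translation gesture
            case "pinch-out": m.scale *= 2.0; break; // scaling gesture
            default: /* unrecognized gesture: no-op */ break;
        }
    }

    public static void main(String[] args) {
        Model model = new Model();
        // Apply a stream of recognized gestures to the model.
        for (String g : new String[] {"swipe", "pinch-out", "wave"}) {
            apply(model, g);
        }
        System.out.println(model.x + " " + model.scale);
    }
}
```

In practice the dispatch table would be driven by the motion-tracking library's recognized gesture events rather than hard-coded strings, but the same lookup-and-apply structure holds.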


Presentations and reports related to our progress and future plans are available for download below:

PDR Slides
MDR Slides
MDR Report

The Team

(Pictured from left to right)
Professor Dennis Goeckel - team advisor
Shamit Som - volumetric sweeping device
Kevin Eykholt - custom 3D modeling software
Tom Finneran - software to projector interface
Chris Pitoniak - Leap Motion to software interface

Kevin Eykholt is a CSE who feels more at home in the virtual world than in the physical one. As a result, he is responsible for learning how to utilize JOGL to design the 3D modeling software. Although he has never used JOGL before, Kevin is able to quickly learn new skills, something that is necessary for a project as ambitious as this one.

Tom Finneran is a CSE and a hobbyist with an affinity for creating embedded systems projects, including some experience with making a persistence-of-vision display. Writing software to interface between the computer and the projector was a natural challenge for him to undertake.

Christopher Pitoniak is a CSE with a strong passion for software development. He takes great satisfaction in designing and creating new programs that can make everyday tasks more efficient and enjoyable. Therefore, the role of designing and implementing the 3D user input library was the perfect deliverable for him.

Shamit Som is the sole EE in the group, and has broad experience in hardware and software alike. His interest and prowess in hands-on work prove useful, if not essential, to the development of the mechanical portion of the project. Additionally, he has experience with computer 3D modeling in different contexts, ranging from CAD (exemplified by the design work done in SolidWorks on the sweep device for this project) to video game object modeling (using 3DS Max and Milkshape). This experience allows the group to consider different approaches to implementing functions in the 3D modeling software and choose the best ones accordingly.

Block Diagram and General Solution

The user's hand gestures are captured by a motion-tracking sensor and sent to 3D modeling software running on a computer. The software parses the input data and translates any recognized gestures into modeling functions. After updating the 3D model, the software displays it on the computer in a traditional 2D representation and prepares the data for the output subsystem by generating cross-sectional images of the model. The output subsystem consists of a custom-built volumetric sweep device and a projector, along with an embedded system for control and synchronization. The sweep device allows volumetric images to be displayed, and the projector projects the cross-sectional images onto the swept volume. The projector receives the cross-sectional images from the software via HDMI, and the embedded system synchronizes the motion of the sweep device to the real-time cross-sectional image feed so that each image is displayed at the correct moment.
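The synchronization step above amounts to picking, for the sweep device's current rotation angle, which cross-sectional image the projector should show. A minimal sketch in Java, assuming a degree-based angle reading and a fixed number of slices per revolution (both assumptions for illustration, not the project's actual firmware):

```java
public class SliceSync {
    /**
     * Map the sweep screen's rotation angle (degrees, any sign) to a
     * cross-section index in [0, numSlices). Each revolution of the
     * screen cycles once through all rendered cross-sections.
     */
    static int sliceIndex(double angleDegrees, int numSlices) {
        // Normalize the angle to [0, 360), handling negative readings.
        double wrapped = ((angleDegrees % 360.0) + 360.0) % 360.0;
        return (int) (wrapped / 360.0 * numSlices);
    }

    public static void main(String[] args) {
        int slices = 180; // assumed number of cross-sections per revolution
        System.out.println(sliceIndex(0.0, slices));   // 0
        System.out.println(sliceIndex(90.0, slices));  // 45
        System.out.println(sliceIndex(359.9, slices)); // 179
    }
}
```

In the real system this lookup would be driven by a rotation sensor on the sweep device, with the embedded controller timing the projector's image feed so the correct slice lands on the screen at the correct angle.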