HPL EQUIPMENT AND CAPABILITIES


 

HPL Advanced Driving Simulator / HPL Driving Simulator Trainer / Eye Trackers & Other Equipment

 

 

HPL Advanced Driving Simulator

 

The centerpiece of the Human Performance Laboratory (HPL) is the Advanced Driving Simulator (Figure 1).  The simulator has been in operation at the HPL since 1996 and has been an integral part of dozens of research projects on topics as varied as driver training and assessment, hazard anticipation, attention maintenance, decision making, driver distraction and cell phone use, lane changing, pavement markings, traffic control sign and signal design (both on the road and in tunnels), advanced traveler information systems, forward collision warning systems, and in-vehicle music retrieval systems.  In November 2008, the software and computer hardware underwent a complete upgrade.  While the car and projection screens are the same as before, the new simulator software for scene and scenario development is part of a new Realtime Technologies, Inc. (RTI) driving simulation platform, which has increased the fidelity of the vehicles, buildings, and roadways in the virtual environment and significantly expanded the lab’s data collection and analysis capabilities.

 

 

Figure 1:  HPL Advanced Driving Simulator

 

 

SIMULATOR HARDWARE:  The vehicle cab in the HPL Advanced Driving Simulator is a full-sized Saturn sedan.  A driver operates the controls of the Saturn just as he or she would on the road.  The visual world is displayed on three screens, one in front of the car and one on each side.  Each screen subtends 60 degrees horizontally and 30 degrees vertically.  As the driver steers, brakes, or accelerates, the roadway visible to the driver changes appropriately.  The images are updated 60 times a second by a network of four advanced RTI simulator servers, which process in parallel the images projected to each of the three screens using high-end multimedia video processors.  The image resolution on each screen can be as high as 1024 × 768.  The sound system for the simulator consists of three Logitech Dolby 2.1 Surround Sound speakers: two located on the left and right sides of the car and a sub-woofer located in front of the car.  The system provides realistic road, vehicle, wind, and other noises with appropriate direction, intensity, and Doppler shift.

 

SIMULATOR SOFTWARE:  The simulator is run by four custom-built rack-mounted servers provided by RTI.  Three of the servers are each responsible for processing the visual database, dynamic objects, and driver point of view, and for rendering the image projected onto one of the three simulator screens.  Three software packages are used to develop the virtual environment and program the driving scenarios.  SimCreator coordinates all aspects of the system: it takes inputs from the car (steering angle, throttle, brake, turn indicators, etc.) and from the system (scripts, the virtual database, scenario parameters, data files, scenario definitions) and outputs the appropriate scenes to the projectors in real time.  A second software package, Internet Scene Assembler (ISA), allows virtual environments to be constructed from a library of uploaded roadway tiles.  Custom vehicle paths and behaviors can be programmed using a combination of predefined sensor objects and JavaScript.  Both autonomous vehicles (random vehicles placed in the environment and controlled by the computer according to the desired traffic density) and scripted vehicles (specific vehicles placed by the programmer to behave in some predetermined way for each participant) can be programmed in ISA.  Vehicles in the virtual environment move and behave very realistically.  Finally, custom geometry, road tiles, and traffic paths can be built in a 3-D modeling program called MultiGen Creator; MultiGen geometry and paths can then be uploaded directly to ISA as tiles for use in our simulations.  SimCreator and ISA are both completely open source, allowing us unprecedented flexibility in the types of data we collect.  This flexibility also makes it easy to upgrade the system and to incorporate new study-specific, custom-built hardware into the simulation system for testing purposes.
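To illustrate how a scripted event of this kind is typically put together, the sketch below shows a proximity trigger that releases a scripted vehicle once ownship comes within a set distance of an intersection.  The sensor and vehicle objects here are simple stand-ins written for this example so that it runs on its own; they are not ISA’s actual predefined sensor objects or API.

    // Illustrative sketch only: the objects below are hypothetical stand-ins,
    // not ISA's predefined sensor or vehicle objects.

    // Minimal mock of a scripted vehicle that can be released onto its path.
    function ScriptedVehicle(name) {
      this.name = name;
      this.released = false;
      this.release = function (speedMph) {
        this.released = true;
        console.log(this.name + " released at " + speedMph + " mph");
      };
    }

    // Proximity "sensor": fires once when ownship comes within triggerDistance
    // (meters) of a fixed trigger point in the virtual environment.
    function ProximitySensor(triggerX, triggerY, triggerDistance, onTrigger) {
      var fired = false;
      this.update = function (ownshipX, ownshipY) {
        if (fired) return;
        var dx = ownshipX - triggerX;
        var dy = ownshipY - triggerY;
        if (Math.sqrt(dx * dx + dy * dy) <= triggerDistance) {
          fired = true;
          onTrigger();
        }
      };
    }

    // Scenario logic: when ownship comes within 50 m of the intersection at
    // (200, 0), release a scripted vehicle that pulls out at 25 mph.
    var crossingCar = new ScriptedVehicle("crossingCar");
    var sensor = new ProximitySensor(200, 0, 50, function () {
      crossingCar.release(25);
    });

    // Stand-in for the per-frame update the simulation loop would provide:
    // ownship drives along the x axis toward the intersection.
    for (var x = 0; x <= 200; x += 10) {
      sensor.update(x, 0);
    }

In an actual ISA scenario, the same logic would be attached to one of ISA’s predefined sensor objects, and the per-frame position updates would come from the simulation itself rather than from the loop above.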

 

DATA COLLECTION & ANALYSIS:  The HPL Advanced Driving Simulator is extremely flexible in its ability to provide both common and uncommon types of data to the researchers using it.  Typical parameters such as throttle position, velocity, direction of travel, steering angle, lane position, braking, and signaling are automatically recorded at up to 60 Hz (the rate can be user specified), not only for the participant’s vehicle (ownship) but also for any vehicle specified by the experimenter.  In addition, video signals from the simulator are easily recorded on digital tape for later replay and analysis.  However, the system’s data collection capabilities go far beyond the most typically recorded parameters.  Because the system is open source, custom JavaScript code can be written to record just about any parameter required for a study.  For instance, if a study requires that the distance between ownship and a pedestrian about to step into the road be recorded, JavaScript code can be included in the model to record the distance between those two objects, starting at some predefined point and ending after ownship has passed (or hit) the pedestrian.  As mentioned earlier, custom hardware (displays, in-vehicle devices such as GPS units, iPods, or collision warning systems, etc.) can also be incorporated into the simulator with minimal effort, and data from that hardware can be recorded along with the system’s typical parameters.
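The pedestrian example above might be sketched roughly as follows.  The frame-by-frame positions are synthetic and the object layout is assumed purely for illustration; only the distance calculation itself reflects what such custom data-collection code would compute.

    // Illustrative sketch of logging the distance between ownship and a
    // pedestrian on every simulation frame; all positions here are synthetic.

    // Euclidean distance in the ground plane between two objects.
    function distance(a, b) {
      var dx = a.x - b.x;
      var dy = a.y - b.y;
      return Math.sqrt(dx * dx + dy * dy);
    }

    // Log one sample per frame, starting at a predefined point and ending
    // once ownship has passed (or hit) the pedestrian.
    function logPedestrianDistance(frames, pedestrian, startX) {
      var samples = [];
      for (var i = 0; i < frames.length; i++) {
        var ownship = frames[i];
        if (ownship.x < startX) continue;        // before the trigger point
        samples.push({ t: ownship.t, d: distance(ownship, pedestrian) });
        if (ownship.x > pedestrian.x) break;     // ownship has passed the pedestrian
      }
      return samples;
    }

    // Synthetic ownship track sampled at 60 Hz, moving along x at 15 m/s.
    var frames = [];
    for (var f = 0; f <= 300; f++) {
      frames.push({ t: f / 60, x: (f / 60) * 15, y: 0 });
    }
    var pedestrian = { x: 60, y: 3 };            // standing at the curb

    var samples = logPedestrianDistance(frames, pedestrian, 30);
    console.log(samples.length + " samples, closest approach " +
      Math.min.apply(null, samples.map(function (s) { return s.d; })).toFixed(1) + " m");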

 

HPL Driving Simulator Trainer

 

In 2009, the Human Performance Laboratory received a second driving simulator to complement our research on the human factors of driver training.  The new simulator is a Systems Technology, Inc. (STI) driving simulation system (Figure 2) and will be used primarily for driver training research.  The advantage of the STI system is that it is a very powerful driving simulation system at an affordable cost, and it has an extremely large user base in the driver’s education, insurance, and training research community.  While our system will be somewhat upsized compared to the typical STI system in the field, the fact that our platform will operate using the same software and basic hardware components means our research findings will be more easily generalized for use in the field and will greatly increase our ability to collaborate with other organizations using the same system on projects such as training research or large-scale field studies.  Sharing an STI system’s scenarios and visual databases is as easy as emailing a single file to another user.

 

Figure 2: STI Simulator Hardware

 

SIMULATOR HARDWARE:  For our new system, a custom-built simulation platform is being constructed.  The new system will use an adjustable driver’s chair and a steering/pedal console.  Three 60″ (diagonal) screens will sit directly in front of the driver, subtending at least 160 degrees of visual angle.  Roadway images will be projected onto the screens by state-of-the-art short-throw projectors, which can be placed directly below the screens, eliminating the need for ceiling-mounted projectors and allowing a more compact simulator setup.  The sound system for the simulator will consist of three Logitech Dolby 2.1 Surround Sound speakers: two located behind the screens to the left and right of the participant and a sub-woofer located in front of the participant.

 

SIMULATOR SOFTWARE:  The STI simulator is run by three high-end graphics computers operating in parallel.  The center-channel computer controls the vehicle and environmental dynamics as well as the view projected to the center screen; the left- and right-channel computers control the views on the left and right screens.  Images are projected at a resolution of 1024 × 768 and refreshed at 60 Hz.  Tiles and scenarios are programmed directly in STI’s simulator software, STISIM Drive.  Unlike larger systems, which require multiple software packages, everything from scenario development, visual database development, and publishing to simulation execution and data collection is done within STISIM Drive.

 

DATA COLLECTION & ANALYSIS:  The STI simulator system automatically records the most common driving parameters, such as throttle position, velocity, direction of travel, steering angle, lane position, braking, and signaling.  Data describing the participant’s vehicle relative to other vehicles in the environment (such as following distance) can also be recorded.  Events can be programmed and data relative to those events recorded.  Also, because the system was designed with training in mind, data relative to the “rules of the road” are recorded; for instance, drivers may be “fined” for running stop signs or failing to use indicators during a turn.  Attention maintenance and secondary tasks can also be programmed in STISIM Drive, with the associated measures written to the data file.  One of the most powerful features of the STI simulator is its ability to replay a drive immediately after it concludes, an extremely useful feature for training purposes because it allows immediate feedback to be provided to the driver.
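As an illustration of how such a data file might be analyzed after a drive, the sketch below computes a few summary measures from comma-separated samples.  The column layout (time, speed, lane position, fine flag) is assumed purely for this example and is not STISIM Drive’s actual output format.

    // Illustrative post-processing of a driving-performance data file.  The
    // column layout (time s, speed mph, lane position ft, fine flag) is an
    // assumption made for this sketch, not STISIM Drive's real format.
    var rawData =
      "0.0,30.1,0.4,0\n" +
      "0.5,31.0,0.7,0\n" +
      "1.0,29.5,-0.2,1\n" +   // a "fine" (e.g., a missed indicator) on this sample
      "1.5,30.4,0.1,0\n";

    var speeds = [];
    var lanePositions = [];
    var fines = 0;

    rawData.trim().split("\n").forEach(function (line) {
      var cols = line.split(",");
      speeds.push(parseFloat(cols[1]));
      lanePositions.push(parseFloat(cols[2]));
      fines += parseInt(cols[3], 10);
    });

    function mean(xs) {
      return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
    }
    function stdDev(xs) {
      var m = mean(xs);
      return Math.sqrt(mean(xs.map(function (x) { return (x - m) * (x - m); })));
    }

    console.log("Mean speed (mph):         " + mean(speeds).toFixed(1));
    console.log("SD of lane position (ft): " + stdDev(lanePositions).toFixed(2));
    console.log("Fines incurred:           " + fines);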

 

 

 

Eye Trackers & Other Equipment

 

The HPL also possesses a large array of other equipment for use in research studies.  The lab owns two eye tracking systems that are used for transportation studies.  The lab’s primary eye tracker for simulator and field studies is the ASL Mobile Eye system.  The Mobile Eye is an ultra-lightweight, highly portable system that can be used either in the simulator or in the field.  It consists of a pair of goggles containing miniaturized optics: a camera for viewing the eye, another for viewing the scene ahead, an infrared light source, and a small reflective monocle that lets the eye camera view the eye without sitting directly in front of the participant’s eye (Figure 3).  The scene and eye data are recorded for later analysis on a portable DVCR, which can be clipped to the participant’s belt or placed on the passenger seat.  Special software is used to overlay the driver’s calculated point of gaze on the video of the scene ahead (Figure 4).

 

 

Figure 3:  ASL Mobile Eye II Eye Tracker

 

 

 

Figure 4:  Crosshairs on Scene Output Indicate Participant's Point of Gaze

The lab’s other transportation eye tracker is an ASL 5000 head mounted system (Figure 5).  Because it is also an ASL system, its optics operate in much the same way.  However, rather than using a scene camera, the ASL 5000 uses the output of the simulator’s video signal as the scene.  Point of gaze is calculated from a combination of eye position and head position, with head position measured by a magnetic head tracking system that is integrated with the simulator.  The point of gaze is then overlaid on the video of the simulator output.  The lab also owns a third eye tracking system, an SMI eye tracker, for PC-based training studies.
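As a rough sketch of how eye-in-head direction and head orientation combine to yield a point of gaze on the scene, the example below simply adds the head and eye yaw and pitch angles and intersects the resulting line of gaze with a screen a fixed distance ahead.  This simplified additive geometry conveys only the general idea; it is not ASL’s actual calculation.

    // Rough geometric sketch: combine head orientation (as a magnetic head
    // tracker might report it) with eye-in-head gaze angles to find where
    // the line of gaze meets the projection screen.  Treating yaw and pitch
    // as simply additive is an approximation made for this illustration.

    function rad(deg) { return deg * Math.PI / 180; }

    // headYaw/headPitch: head orientation (degrees); eyeYaw/eyePitch: gaze
    // direction relative to the head; screenDistance: distance to screen (m).
    function pointOfGaze(headYaw, headPitch, eyeYaw, eyePitch, screenDistance) {
      var yaw = rad(headYaw + eyeYaw);        // total horizontal gaze angle
      var pitch = rad(headPitch + eyePitch);  // total vertical gaze angle
      return {
        x: screenDistance * Math.tan(yaw),    // horizontal offset on screen (m)
        y: screenDistance * Math.tan(pitch)   // vertical offset on screen (m)
      };
    }

    // Head turned 10 degrees left, eyes a further 5 degrees left and 2 down,
    // screen 2.5 m ahead of the driver.
    console.log(pointOfGaze(-10, 0, -5, -2, 2.5));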

 

Figure 5:  ASL Head Mounted System

 

 

 

The lab also has a large array of equipment for use in field studies.  The lab uses a Vericom system to record the on-board diagnostics of vehicles in the field; data from other devices, such as global positioning units or range finders, can also be recorded by the Vericom.  The lab has also developed a means of easily recording video in the field.  The lab’s “Four-Camera Mobile Lab” system (Figure 6) can digitally record up to four video channels and includes mounts for three roof-mounted cameras and one head-mounted camera.  The system can be installed in any vehicle within minutes and removed just as quickly.

 

 

Figure 6: Four Camera Mobile Lab System