Capturing body language
Project overview
For this project, supported by the Ford Motor Company, we are developing a library of posture and gesture data to help us better understand potential interactions between vehicles and pedestrians. The work is motivated, in particular, by the increasing sophistication with which vehicle-based sensor platforms can image and tag signals of human activity as they traverse urban spaces. In building the dataset, we rely on motion capture of body language, which allows us to acquire very high-resolution positions for key sequences of movement; these we then process and map to synthetic computational rigs. In this way, the data can be incorporated into our other modeling platforms for what-if simulation and visualization.
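The capture-to-rig step described above can be sketched, in a very simplified form, as matching captured joints to rig joints by name and rescaling positions to the rig's proportions. This is a minimal illustration only, not the project's actual pipeline: the joint names, the name mapping, and the uniform-scale assumption are all hypothetical.

```python
# Hypothetical mapping from capture-system joint names to rig joint
# names; real rigs and capture suits use their own conventions.
CAPTURE_TO_RIG = {
    "LHand": "hand_l",
    "RHand": "hand_r",
    "Head": "head",
}

def retarget_frame(frame, scale=0.9):
    """Map one frame of captured joint positions (metres) onto the
    rig's joint names, applying a uniform scale to account for the
    synthetic character's proportions. Unmapped joints are dropped."""
    return {
        CAPTURE_TO_RIG[joint]: tuple(scale * c for c in pos)
        for joint, pos in frame.items()
        if joint in CAPTURE_TO_RIG
    }

# One illustrative frame of captured (x, y, z) positions.
frame = {"LHand": (0.5, 1.2, 0.1), "Head": (0.0, 1.7, 0.0)}
rig_frame = retarget_frame(frame)
print(rig_frame)
```

In practice, retargeting is done per-bone in joint-angle space rather than by scaling raw positions, but the name-mapping idea is the same.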
Eye candy
Motion capture data for pedestrian movement, retargeted to a synthetic character rig for body language analysis and tagging of activities.
Related groups

Robot motion control
Human behavior in critical scenarios
Modeling riots
A toolkit for measuring sprawl
Simulating crowd behavior
Wi-Fi geography
Simulating sprawl