
What is the Middlebury Motion Capture Lab?

It’s a fully functioning motion capture lab that supports class use, experiments, and interdisciplinary and interdepartmental research for faculty, staff, and students here at Middlebury College. It was launched in Fall 2016 by dance professor and lab director Scotty Hardwig, with sponsorship from the Dance Program and the Fund for Innovation (FFI), as a laboratory for exploring new digital tools that might provide an integration point between the arts, sciences, humanities, and athletics through the digital study of human motion.

Equipped with the hardware (a full-body motion capture suit with gyroscopic sensors) and software (a dedicated computer running motion capture and animation programs) to produce high-end avatarial animation and live-capture digital animation, the lab provides a space for kinetic learning and creative applications across the disciplines of dance, film, music, computer programming, and animation. The space also serves as a potential future platform for virtual reality tracking, experiments in video game design, Kinect skeletal tracking, and any number of other cutting-edge interfaces between technology and the body.

It’s also our goal for this lab space to be a connective zone between different disciplines and practices on campus and in the Middlebury community through performances, research projects, or experiments that involve embodied experience and digital tracking.


What is Motion Capture?

The motion capture suit is a state-of-the-art wireless motion capture system: a lycra strap-based bodysuit with attached gyroscopic sensors (a kind of inertial sensor) that position each joint of the body relative to Earth’s gravity, allowing seamless real-time tracking of each body part in three-dimensional space (X, Y, Z coordinates). The sensors transmit digital data to a wireless receiver, where the MVN Studio software processes it and produces an avatarial representation (an animated human character) of the person wearing the suit. This sensor-generated data opens up countless creative and learning possibilities. It could be used in anatomical sensing lessons, for instance to study a dancer’s proprioceptive responsiveness in feeling the passage of weight between joints. And since the software can also track a dancer’s center of gravity in real time, it could illustrate technical principles for dancers as they transfer their weight, or help them understand line, shape, and the geometry of phrase-based movement from a visual and physical perspective at once.
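As a rough illustration of the kind of lesson that joint data makes possible, here is a minimal Python sketch that estimates a center of gravity as the mass-weighted average of tracked segment positions. The segment names, the frame format, and the mass fractions are all illustrative assumptions, not the MVN Studio data model:

```python
# Hypothetical sketch, not the Xsens API: estimate a dancer's center of
# gravity from one frame of per-segment positions. We assume the capture
# stream has already been parsed into a dict of segment name -> (x, y, z)
# in meters; the real MVN stream has its own format and segment list.

# Illustrative body-segment mass fractions (roughly in line with standard
# biomechanics tables; not Xsens's own body model).
MASS_FRACTIONS = {
    "head": 0.08, "torso": 0.43, "pelvis": 0.11,
    "left_thigh": 0.10, "right_thigh": 0.10,
    "left_shank": 0.045, "right_shank": 0.045,
    "left_foot": 0.015, "right_foot": 0.015,
    "left_arm": 0.025, "right_arm": 0.025,
}

def center_of_gravity(frame):
    """Return an (x, y, z) estimate as the mass-weighted mean of segments."""
    total = sum(MASS_FRACTIONS[s] for s in frame if s in MASS_FRACTIONS)
    cx = cy = cz = 0.0
    for segment, (x, y, z) in frame.items():
        w = MASS_FRACTIONS.get(segment, 0.0) / total
        cx += w * x
        cy += w * y
        cz += w * z
    return cx, cy, cz

# Example frame: a dancer shifting weight onto the right leg.
frame = {
    "head": (0.05, 0.0, 1.70), "torso": (0.04, 0.0, 1.30),
    "pelvis": (0.03, 0.0, 1.00),
    "left_thigh": (-0.10, 0.0, 0.75), "right_thigh": (0.15, 0.0, 0.75),
    "left_shank": (-0.12, 0.0, 0.40), "right_shank": (0.16, 0.0, 0.40),
    "left_foot": (-0.12, 0.0, 0.05), "right_foot": (0.16, 0.0, 0.05),
    "left_arm": (-0.30, 0.0, 1.40), "right_arm": (0.35, 0.0, 1.40),
}
print(center_of_gravity(frame))  # the x value leans toward the right leg
```

Run on a live stream frame by frame, an estimate like this could be projected back to the dancer in real time as a visible point of balance.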

From the animation side of things, this computer-generated “body” can also be used to create avatars in software like Blender, 3ds Max, Maya, or Cinema 4D, and to choreograph full-length animated works from the movements of dancers or actors. Though the system was originally designed for this kind of avatar creation, the live data produced by the body moving within the suit can also drive reactive live-animation displays for concert dance or installation performances, using projectors and live animation software like Isadora, eMotion, or Jitter. The data points from each joint could likewise control sound landscapes through Ableton Live, so that the dancing body becomes an instrument in a reactive electronic music show. Most recently, the maker of this hardware, Xsens, has been developing a tracking system that measures the weight load over the knee joint to help with injury prevention and athletic training. The possibilities of these digital tools are practically endless, with far-ranging applications in dance, film and media culture, computer programming, animation, sports medicine, anatomical research, music, and live performance.
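To make the sound-control idea concrete, here is a small hypothetical sketch using the python-osc library. The joint stream and OSC address are placeholders, and the routing into any particular tool is an assumption, though software like Isadora (or Ableton Live through a Max for Live OSC receiver) can listen for messages like these:

```python
# Hypothetical sketch, not a supported Xsens integration: map one joint's
# height to a normalized 0-1 control value and broadcast it over OSC.
import math
import time

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

OSC_HOST, OSC_PORT = "127.0.0.1", 9000  # placeholder receiver address
client = SimpleUDPClient(OSC_HOST, OSC_PORT)

MIN_Z, MAX_Z = 0.0, 2.0  # assumed height range of the tracked joint, meters

def joint_height(t):
    """Stand-in for a live mocap stream: a hand bobbing up and down."""
    return 1.2 + 0.6 * math.sin(t)

start = time.time()
while time.time() - start < 10.0:          # stream for ten seconds
    z = joint_height(time.time() - start)
    value = (z - MIN_Z) / (MAX_Z - MIN_Z)  # normalize to 0..1
    client.send_message("/mocap/right_hand/height", value)
    time.sleep(1 / 60)                     # roughly one frame at 60 fps
```

Mapped onto a filter cutoff or a synth parameter at the receiving end, a stream like this is what lets the dancer play the room as an instrument.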


One of our goals for this project, as it relates to assessment, is to use this technology in new ways that expand upon its traditional uses and functions. While the Lab can support a number of high-tech applications within video game design and film, one of our research goals is to find new ways of using this technology to bring together different fields from across the liberal arts on campus, providing a model for how high-tech lab spaces can serve interdisciplinary functions at a place like Middlebury College.

TRADITIONAL USES OF THIS TECHNOLOGY

- Avatar design for movies and film
- Animation/avatar design for video games

POTENTIAL NEW USES

- Interdisciplinary research between the Arts and Sciences in wide-ranging fields, including Sports Medicine, Biology, and other hard sciences
- Interdisciplinary research in the Arts (dance/film/studio art/architecture)
- New modes of live performance within the Arts, with interactive data, visuals, and sound
- New ways of creating resources for pedagogy in the Arts, Sciences, and Humanities (such as 3-D teaching tools, animated classroom environments, or avatar-based learning)
- A platform for Psychology studies that involve motion, perception, and space
- A platform for Sociology/Anthropology studies that deal with human motion and body socialization
- Integration with Computer Science (primarily Robotics, but also theoretical applications relating to systems theory and AI)
- Avenues for theoretical research on the digitization of the human body (digital performance research)
- A model for using high-tech lab spaces to facilitate undergraduate and faculty research within the Liberal Arts setting