Kinect posture and gesture recognition

Application description

Innovative analysis methods applied to data extracted by off-the-shelf peripherals can provide useful results in activity recognition without requiring large computational resources. We propose a framework for automated posture and gesture detection that exploits depth data from the Microsoft Kinect. Its novel features are:

  1. the adoption of Semantic Web technologies for posture and gesture annotation (a minimal annotation sketch follows this list);
  2. the exploitation of non-standard inference services provided by an embedded matchmaker [1] to automatically detect postures and gestures.
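As a rough illustration of feature 1, the sketch below encodes a key posture as RDF triples with Python and rdflib. The namespace and the class and property names (KeyPosture, hasTorsoOrientation, and so on) are hypothetical placeholders, not the framework's actual vocabulary, which is defined in the ontology described below and in the cited publications.

```python
from rdflib import Graph, Namespace, RDF

POSE = Namespace("http://example.org/posture#")  # hypothetical ontology IRI

g = Graph()
g.bind("pose", POSE)

# One detected key posture, asserted as an individual of a DL class with
# geometric properties expressed as ontology role assertions.
kp = POSE["keyPosture_42"]
g.add((kp, RDF.type, POSE.KeyPosture))
g.add((kp, POSE.hasTorsoOrientation, POSE.Upright))
g.add((kp, POSE.hasLeftElbowState, POSE.Bent))
g.add((kp, POSE.hasRightArmElevation, POSE.Raised))

print(g.serialize(format="turtle"))
```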

Specifically, the recognition problem is treated as resource discovery, grounded on semantic-based matchmaking [2]. An ontology for the geometry-based semantic description of postures has been developed and encapsulated in a Knowledge Base (KB), which also includes several instances representing the pose templates to be detected. 3D body model data captured by Kinect are pre-processed on the fly to identify key postures, i.e., unambiguous and non-transient body positions, which typically correspond to the initial or final state of a gesture. Each key posture is then annotated using standard Semantic Web languages grounded on Description Logics (DL). Non-standard inference services then compare the retrieved annotations with the templates populating the KB, and a similarity-based ranking supports the discovery of the best matching posture. The ontology further allows a gesture to be annotated through its component key postures, enabling gestures to be recognized in the same way.
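The pipeline can be sketched as follows, under two explicit assumptions: a key posture is approximated here as a skeleton window with negligible joint motion (the framework's actual stability criterion is not reproduced), and the matchmaker's non-standard DL inferences are replaced by a plain feature-overlap penalty that merely conveys the idea of similarity-based ranking over flattened template descriptions.

```python
from typing import Dict, List, Tuple

Frame = Dict[str, Tuple[float, float, float]]  # joint name -> (x, y, z)

def is_key_posture(window: List[Frame], eps: float = 0.01) -> bool:
    """Treat a window of skeleton frames as a key posture when every
    joint moved less than eps (metres) between first and last frame."""
    first, last = window[0], window[-1]
    return all(
        sum((a - b) ** 2 for a, b in zip(first[j], last[j])) ** 0.5 < eps
        for j in first
    )

# Pose templates from the Knowledge Base, flattened here to feature sets.
TEMPLATES = {
    "StandingArmsRaised": {"torso:upright", "left_arm:raised", "right_arm:raised"},
    "Sitting": {"torso:upright", "hips:bent", "knees:bent"},
}

def rank_templates(annotation: set) -> List[Tuple[int, str]]:
    """Rank templates by a penalty counting the features a template
    requires but the observed annotation lacks (a crude stand-in for
    the matchmaker's similarity-based ranking)."""
    return sorted((len(t - annotation), name) for name, t in TEMPLATES.items())

observed = {"torso:upright", "left_arm:raised", "right_arm:raised"}
print(rank_templates(observed)[0])  # (0, 'StandingArmsRaised'): best match
```

In the actual framework the ranking is computed by the embedded reasoner over DL annotations rather than flat feature sets, so that partially matching postures still receive a meaningful score.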


The framework has been implemented in a prototype and experimental tests have been carried out on a reference dataset. Results indicate posture/gesture identification performance competitive with approaches based on machine learning.

Screenshot of the prototype GUI; its main elements are:

  1. Real-time Kinect camera output with detected skeleton superimposed.
  2. Semantic annotation panel, with a tree-like graphical representation of the reference ontology and annotation editing via drag-and-drop of classes and properties.
  3. Timeline with the sequence of recognized postures and gestures, as processed by the embedded reasoner (a sequence-matching sketch follows this list).
  4. Toolbar and settings.
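As a rough sketch of how the timeline in item 3 could feed gesture recognition, the snippet below matches gestures as ordered subsequences of recognized key-posture labels. The gesture names and their component postures are illustrative assumptions, not entries of the actual Knowledge Base.

```python
from typing import List, Optional

# Gestures annotated as ordered sequences of component key postures.
GESTURES = {
    "SitDown": ["Standing", "Crouching", "Sitting"],
    "RaiseHand": ["Standing", "StandingArmRaised"],
}

def recognize_gesture(timeline: List[str]) -> Optional[str]:
    """Return the first gesture whose component key postures occur, in
    order, within the timeline of recognized postures."""
    for name, steps in GESTURES.items():
        frames = iter(timeline)
        # 'step in frames' consumes the iterator up to the match, so the
        # test succeeds only if steps appear as an ordered subsequence.
        if all(step in frames for step in steps):
            return name
    return None

print(recognize_gesture(["Standing", "Crouching", "Sitting"]))  # SitDown
```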

Publications

Scientific publications about Kinect posture and gesture recognition



  1. M. Ruta, F. Scioscia, M. di Summa, S. Ieva, E. Di Sciascio, M. Sacco. Semantic matchmaking for Kinect-based posture and gesture recognition. International Journal of Semantic Computing, vol. 8, no. 4, pp. 491–514, 2014.

  2. M. Ruta, F. Scioscia, M. di Summa, S. Ieva, E. Di Sciascio, M. Sacco. Body posture recognition as a discovery problem: a semantic-based framework. 2014 International Conference on Active Media Technology (AMT'14), vol. 8610, pp. 160–173, August 2014.

  3. M. Ruta, F. Scioscia, M. di Summa, S. Ieva, E. Di Sciascio, M. Sacco. Semantic matchmaking for Kinect-based posture and gesture recognition. Eighth IEEE International Conference on Semantic Computing (ICSC 2014), pp. 15–22, June 2014.

References



Developed By
SisInfLab, Politecnico di Bari, SWoT