This paper presents a new video motion descriptor based on a multi-scale video segmentation, providing a multilayered output as well as connections to the rich interactions that occur between objects at the semantic level. We also emphasize relationships between motion clusters by introducing a relative motion descriptor that encapsulates relative motion patterns within a local spatio-temporal neighborhood. Experimental results on the challenging TRECVID MED11 event recognition dataset validate the approach.