



In this paper, an unsupervised method is proposed to automatically discover motion patterns occurring in traffic video scenes. For this purpose, an improved Group Sparse Topical Coding (GSTC) framework is applied to optical flow features extracted from video clips in order to learn semantic motion patterns. Compared to the original GSTC, the proposed improved version selects only a small number of relevant words for each topic and hence provides a more compact representation of topic-word relationships. Each video clip can then be sparsely represented as a weighted sum of the learned patterns, which can further be employed in a wide range of applications. Analyzing motion patterns in traffic videos can directly yield high-level descriptions of the scene.
