Hyperspectral images (HSIs) provide invaluable information in both the spectral and spatial domains for image classification tasks. In this paper, we use semantic representation as a mid-level feature to describe the characteristics of image pixels. Deriving an effective semantic representation is critical for achieving good classification performance. Since different image descriptors depict characteristics from different perspectives, combining multiple features in the same semantic space makes the semantic representation more meaningful. First, a probabilistic support vector machine is used to generate semantic representations from multiple features. To derive a better semantic representation, we introduce a new adaptive spatial regularizer that effectively exploits local spatial information, together with a nonlocal regularizer that searches for patch-pair similarities across the whole image. We then combine the multiple features under these local and nonlocal spatial constraints using an extended Markov random field model in the semantic space. Experimental results on three hyperspectral data sets show that the proposed method outperforms several state-of-the-art techniques in terms of region uniformity, overall accuracy, average accuracy, and the Kappa statistic.
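As a minimal sketch of the first step described above, a probabilistic SVM can map each pixel's spectral feature vector to a vector of class probabilities, which serves as its semantic representation. The code below uses scikit-learn's `SVC` with Platt scaling; the synthetic data, shapes, and hyperparameters are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch: per-pixel "semantic" class-probability features from a
# probabilistic SVM. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 200, 10, 3
X = rng.normal(size=(n_pixels, n_bands))       # spectral feature vectors
y = rng.integers(0, n_classes, size=n_pixels)  # pixel class labels

# probability=True enables Platt scaling, so predict_proba returns a
# probability vector over classes for each pixel -- the semantic feature.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
semantic = svm.predict_proba(X)                # shape: (n_pixels, n_classes)

print(semantic.shape)
```

In the full method, one such probability map would be computed per image descriptor, and the resulting semantic representations fused (with spatial regularization) in a common semantic space.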