Although considerable effort has been devoted to the semantic segmentation of laser scanning (LS) data, the extreme complexity of scanned scenes makes it challenging to reliably assign a category label to every point. This study investigates the semantic segmentation of LiDAR point clouds using an improved deep learning method. In particular, the raw data were reorganized into group proposals via Gaussian learning. We generated a structured multi-scale graph for the group proposals, which supports multi-scale analysis in scale space. A self-adaptive graph convolutional network (GCN) was then adopted to extract discriminative point cloud features. Building on this GCN module, an encoder-decoder network semantically labeled the proposals, and the proposal-level inferences were finally transformed into point-wise predictions. To refine the segmentation results, the output probabilities of the proposed framework were weighted and used as the input of a conditional random field (CRF) algorithm. Experiments on three typical datasets (i.e., ParisLille-3D, Semantic3D, and vKITTI) comprehensively evaluated the performance of our approach. The experimental results demonstrated that the proposed framework achieves better performance for several object categories.
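The core operation underlying the pipeline described above is a graph convolution over a neighborhood graph of points. The following is a minimal, illustrative sketch of one vanilla graph-convolution step on a k-nearest-neighbor graph, using NumPy only; the function names, the choice of k, and the simple mean aggregation with a fixed weight matrix are assumptions for illustration, not the paper's self-adaptive GCN or its learned parameters.

```python
import numpy as np

def knn_graph(points, k):
    """Build k-nearest-neighbor indices for an (N, 3) point array.
    Brute-force pairwise distances; fine for a toy example."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Sort each row by distance; skip column 0 (the point itself).
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def graph_conv(features, neighbors, weight):
    """One graph-convolution step: mean-aggregate neighbor features,
    combine with the center feature, project, and apply ReLU."""
    agg = features[neighbors].mean(axis=1)             # (N, C) neighborhood mean
    return np.maximum(0.0, (features + agg) @ weight)  # ReLU((x + agg) W)

rng = np.random.default_rng(0)
pts = rng.random((100, 3))                # toy point cloud
feat = pts.copy()                         # coordinates as initial features
W = rng.standard_normal((3, 16)) * 0.1    # random projection (illustrative)
nbrs = knn_graph(pts, k=8)
out = graph_conv(feat, nbrs, W)
print(out.shape)                          # (100, 16)
```

In a full network such as the one the abstract describes, several such layers would be stacked inside an encoder-decoder, the aggregation weights would be learned (the "self-adaptive" aspect), and the per-point class probabilities would then be refined by the CRF step.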