Convex Tensor Decomposition with the Deterministic Kriging Distance – We present a method for transforming a convolutional neural network into a graph denoising model, a simple variant of convolutional neural networks that requires more computation. The method is based on a recursive inference algorithm that uses the data structure as a learning target in order to avoid overfitting. We show that the resulting graph degradations can be used directly for learning non-linear functions of the network structure and perform more effectively than state-of-the-art methods in this domain. We also show that the graph degradations are independent of the network's input weights. Finally, we demonstrate through experiments that our method improves the performance of graph denoising models on ImageNet.
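The abstract does not spell out how the graph is built or how the recursive inference proceeds, so the following is only a minimal sketch under stated assumptions: the graph is constructed from pairwise cosine similarity of CNN feature vectors, and "recursive inference" is read as iterative neighbourhood averaging over that graph. All names (build_graph, recursive_denoise) and parameters (threshold, steps, alpha) are hypothetical.

```python
import numpy as np

def build_graph(features, threshold=0.2):
    """Hypothetical graph construction: connect nodes whose CNN feature
    vectors have cosine similarity above a threshold."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-12, None)
    similarity = normalized @ normalized.T
    adjacency = (similarity >= threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)  # no self-loops
    return adjacency

def recursive_denoise(features, adjacency, steps=5, alpha=0.5):
    """One simple instance of graph denoising: repeatedly mix each node's
    features with the mean of its neighbours' features."""
    degree = np.clip(adjacency.sum(axis=1, keepdims=True), 1.0, None)
    x = features.copy()
    for _ in range(steps):
        neighbour_mean = adjacency @ x / degree
        x = (1.0 - alpha) * x + alpha * neighbour_mean
    return x

# Usage: denoise 32 nodes carrying 64-dimensional feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64))
adj = build_graph(feats)
denoised = recursive_denoise(feats, adj)
print(denoised.shape)  # (32, 64)
```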
We present Multi-layer Convolutional Neural Networks (ML-CNN). We generalize CNNs to two different models: ML-CNNs and deep-layer networks. The two models, however, differ in many important respects. One concerns the amount of training data: ML-CNNs usually learn the entire network architecture simultaneously, while deep-layer networks can only adapt one layer at a time, rather than multiple layers. We present a multi-layer ML-CNN architecture that jointly combines multiple layers in order to improve the performance of each layer. We demonstrate both models on real datasets, CIFAR-10 and MNIST. Finally, we demonstrate the effectiveness of our ML-CNN approach on the CIFAR-10 dataset.
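The abstract says only that the architecture "jointly combines multiple layers", so here is a minimal sketch of one plausible reading: each convolutional stage contributes a pooled feature summary, and the classifier sees the concatenation of all of them. The class name MultiLayerCNN, the stage sizes, and the pooling/concatenation scheme are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiLayerCNN(nn.Module):
    """Sketch: a CNN whose per-layer feature maps are jointly combined
    by pooling each stage's output and concatenating the summaries."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1, stride=2), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1, stride=2), nn.ReLU()),
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # The classifier reads the pooled summary of every stage (16 + 32 + 64 channels).
        self.classifier = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x):
        pooled = []
        for stage in self.stages:
            x = stage(x)
            pooled.append(self.pool(x).flatten(1))  # keep each layer's summary
        return self.classifier(torch.cat(pooled, dim=1))

# Usage on a CIFAR-10-sized batch (3x32x32 images).
model = MultiLayerCNN(num_classes=10)
logits = model(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```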
Improving the performance of batch selection algorithms trained to recognize handwritten digits
Bayesian Nonparametric Sparse Coding
Convex Tensor Decomposition with the Deterministic Kriging Distance
Learning From An Egocentric Image
Deep-MNIST: Recurrent Neural Network based Support Vector Learning