Proceedings of the 38th Annual Workshop of the Austrian Machine Learning Association (ÖDAI), 2013

This paper presents a new tool for analyzing and evaluating the performance of state-of-the-art deep neural networks (DNNs). Conventional practice designs a model on the data manifold without analyzing the model's outputs, which obscures the model's actual performance. We propose a DNN architecture that uses a deep convolutional network without exploiting a deep state representation. To obtain a more accurate model at lower computational cost, we propose a first-order, learning-based framework for DNN analysis. The architecture is based on an efficient linear transformation, applied within an ensemble model that performs the analysis. Compared with other state-of-the-art deep networks, our method is not necessarily faster, but it requires less computation.
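The abstract does not define the "efficient linear transformation" or the ensemble, so the following is a minimal sketch under stated assumptions: each transformation is taken to be a closed-form, ridge-regularized linear probe fit on one layer's activations, and the "ensemble" is one probe per layer whose accuracies together profile the network. All names here (probe_layer, analyze_network) and the NumPy setup are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def probe_layer(activations, labels, reg=1e-3):
    """Fit a ridge-regularized linear probe (closed-form, first-order in the
    features) mapping one layer's activations to one-hot class labels."""
    X = np.hstack([activations, np.ones((len(activations), 1))])  # append bias
    Y = np.eye(labels.max() + 1)[labels]                          # one-hot targets
    # Closed-form least squares: W = (X^T X + reg * I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

def probe_accuracy(activations, labels, W):
    """Classification accuracy of a fitted probe on the given activations."""
    X = np.hstack([activations, np.ones((len(activations), 1))])
    return float((np.argmax(X @ W, axis=1) == labels).mean())

def analyze_network(layer_activations, labels):
    """Ensemble analysis: one linear probe per layer; per-layer accuracy
    indicates how linearly decodable the task is at each depth."""
    return {name: probe_accuracy(acts, labels, probe_layer(acts, labels))
            for name, acts in layer_activations.items()}

# Toy usage with random "activations" from three layers of a hypothetical DNN.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=512)
layers = {f"layer{i}": rng.standard_normal((512, 64)) for i in range(3)}
print(analyze_network(layers, labels))
```

Because each probe requires only one matrix solve per layer and no backpropagation through the network, a setup along these lines would be consistent with the abstract's claim of analysis at reduced computational cost.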
Sparse Representation based Object Detection with Hierarchy Preserving Homology
Mining for Structured Shallow Activation Functions
On-Demand Crowd Sourcing for Food Price Prediction
D-LSTM: Distributed Stochastic Gradient Descent for Machine Learning

We present a general framework for training deep neural networks (DNNs) with two primary goals: (1) learning a state-of-the-art model for each training set, and (2) training the network with respect to its learning objective. We show that these networks can be trained without hand-tuning or domain-specific inference, for example when learning from hand-written reports. The two main contributions are a method for multi-task classification and a strategy for integrating heterogeneous information from multiple databases. We argue that our theoretical analysis applies across tasks, data sources, and datasets.
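Neither the title nor the abstract specifies the D-LSTM update rule, so the following is a minimal sketch of generic synchronous distributed SGD with gradient averaging, simulated in NumPy on a linear least-squares model. The worker/shard layout and the function names (worker_grad, distributed_sgd) are assumptions for illustration, not the paper's method.

```python
import numpy as np

def worker_grad(w, X, y):
    """Local least-squares gradient on one worker's data shard."""
    return X.T @ (X @ w - y) / len(y)

def distributed_sgd(shards, dim, lr=0.1, steps=200):
    """Synchronous distributed SGD: each step, every worker computes a
    gradient on its own shard; the coordinator averages the gradients
    and applies a single update to the shared parameters."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [worker_grad(w, X, y) for X, y in shards]  # parallel in practice
        w -= lr * np.mean(grads, axis=0)                   # averaged update
    return w

# Toy usage: 4 workers, each holding a shard of one linear regression task.
rng = np.random.default_rng(1)
true_w = rng.standard_normal(8)
shards = []
for _ in range(4):
    X = rng.standard_normal((256, 8))
    shards.append((X, X @ true_w + 0.01 * rng.standard_normal(256)))
w_hat = distributed_sgd(shards, dim=8)
print("recovery error:", np.linalg.norm(w_hat - true_w))
```

Averaging gradients across shards makes each step equivalent to SGD on the pooled data, which is the standard way such synchronous schemes preserve the single-machine training objective.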