Generalist probability theory and dynamic decision support systems – This paper presents a general framework for automatic decision making in dynamic contexts. We formalise decision making as a set of distributed decision processes in which agents form opinions and act according to the rules governing each decision process. We apply this framework to a variety of non-smooth decision problems as well as to decision and resource allocation.
This paper presents a new framework that exploits the learned semantic structure of videos for video classification. Although many methods perform object segmentation with high accuracy, none achieves comparable accuracy at the same scale of video data. We first show how to build a convolutional neural network trained on the semantic structure of videos to classify them. We then apply the method to an object segmentation task in which the model learns embeddings for videos; these embeddings are learned by performing multi-label classification. Because the semantic structure of videos is high-dimensional, the model learns to detect a video's segmentation. Experimental results on the MNIST dataset show that the network outperforms state-of-the-art methods across the board and improves on baseline models by at least 50%.
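The abstract above says the video embeddings are learned by performing multi-label classification. As a minimal sketch of what that training objective looks like, the snippet below puts an independent sigmoid head on top of a fixed-size embedding and scores it with binary cross-entropy. Everything here is an illustrative assumption: the embedding dimension, label count, and function names are invented for the example and are not the paper's actual architecture (which would produce the embeddings with a CNN rather than random values).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_head(embedding, weights, bias):
    """One independent sigmoid per label: any subset of labels can be active."""
    return sigmoid(embedding @ weights + bias)

def bce_loss(probs, targets, eps=1e-7):
    """Binary cross-entropy summed over labels, averaged over samples."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(
        np.sum(targets * np.log(p) + (1 - targets) * np.log(1 - p), axis=1)
    )

# Toy setup: 4 video embeddings of dimension 8, 3 candidate labels.
emb = rng.normal(size=(4, 8))          # stand-in for CNN video embeddings
W = rng.normal(scale=0.1, size=(8, 3)) # classification head weights
b = np.zeros(3)

probs = multilabel_head(emb, W, b)
targets = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 1, 0],
                    [0, 0, 1]], dtype=float)
loss = bce_loss(probs, targets)
print(probs.shape, loss > 0)
```

In a real pipeline the gradient of this loss would be backpropagated through the embedding network, so the embeddings themselves are shaped by the multi-label objective, which is the mechanism the abstract appears to describe.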
Constrained Multi-View Image Classification with Multi-temporal Deep CNN Regressions
Learning and Estimation from Unbalanced Data
Learning from Past Mistreatment
Morphon-based Feature Selection