Mining deep features for accurate diagnosis of congenital abnormalities of retinal lens defects
We present the first study of the effect of the camera on object detection. We study this effect in both the object detection and the recognition task. To this end, we introduce a new dataset that includes camera-detached objects, along with a novel object detection task. We use RGB-D image patches to detect objects with different shape, texture, and appearance features, and then exploit this information by projecting the patches onto the camera-detached objects. We evaluate our approach on the PASCAL VOC dataset with and without these features and show that it outperforms state-of-the-art methods.
We propose a deep generative model that can learn to produce a variety of emotions. Our model includes an external representation of the emotions in a given scene, in which the emotions of human beings are encoded. To learn a complex emotion representation for scenes, we combine human-level language with external representations of emotions from the world in a generative model of the scene. Leveraging this generative model, we produce visual summaries of human emotions, which we then use to make predictions about the emotions of both the human and the environment. The model uses these summaries for the task of human emotion recognition.
On the Effect of LQ-problems in Machine Learning: A General Investigation
The Evolutionary Optimization Engine: Technical Report
Stochastic optimization via generative adversarial computing
Learning a Human-Level Auditory Processing Unit