Telegram Web
Convolutional Networks on Graphs for Learning Molecular Fingerprints

We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.
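The architecture replaces the fixed hash of circular (ECFP) fingerprints with a differentiable analogue: each atom sums its neighbours' features, applies a learned transform, and contributes a softmax-smoothed "index" to the fingerprint vector. A minimal NumPy sketch of that idea (the shared weights `W`, `W_out` and the single transform per depth are hypothetical simplifications of the paper's per-depth, per-degree weight matrices):

```python
import numpy as np

def neural_fingerprint(atom_feats, adjacency, W, W_out, depth=2):
    """Duvenaud-style neural graph fingerprint (simplified sketch).

    atom_feats : (n_atoms, d) initial atom features
    adjacency  : (n_atoms, n_atoms) 0/1 adjacency matrix
    W          : (d, d) message-passing weights (shared across depths here)
    W_out      : (d, fp_len) projection onto fingerprint indices
    """
    h = atom_feats
    fp = np.zeros(W_out.shape[1])
    for _ in range(depth):
        # each atom combines its own features with its neighbours'
        # (the analogue of growing a circular-fingerprint neighbourhood)
        h = np.tanh((h + adjacency @ h) @ W)
        # a softmax over fingerprint indices replaces ECFP's discrete hash
        logits = h @ W_out
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        fp += probs.sum(axis=0)  # pool contributions from every atom
    return fp
```

Because every step is differentiable, gradients flow from a downstream property predictor all the way back into the fingerprint, which is what makes the features data-driven.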

https://github.com/HIPS/neural-fingerprint
https://arxiv.org/abs/1509.09292

#Paper
@BLI_Channel
A biologically plausible neural network for Slow Feature Analysis

Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.
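For reference, the SFA objective the paper starts from has a simple closed-form batch solution: whiten the signal, then keep the directions along which the temporal derivative has the least variance. A minimal NumPy sketch of this classical offline SFA (not the paper's online, biologically plausible Bio-SFA update):

```python
import numpy as np

def sfa(X, n_out):
    """Batch Slow Feature Analysis.

    X : (T, d) time series. Returns a (d, n_out) projection extracting
    the slowest unit-variance features of the centered signal.
    """
    X = X - X.mean(axis=0)
    # whiten, so the unit-variance constraint becomes orthonormality
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs / np.sqrt(evals)
    Z = X @ whiten
    # slowest directions = smallest eigenvalues of the covariance
    # of the (discrete) temporal derivative
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    _, d_evecs = np.linalg.eigh(dcov)  # eigh sorts eigenvalues ascending
    return whiten @ d_evecs[:, :n_out]
```

The paper's contribution is to reach the same objective with an online algorithm whose updates are local to each synapse, which this batch eigendecomposition is not.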

https://github.com/flatironinstitute/bio-sfa
https://arxiv.org/abs/2010.12644

#Paper
@BLI_Channel
PonderNet: Learning to Ponder

In standard neural networks the amount of computation used is directly proportional to the size of the inputs, rather than to the complexity of the problem being learnt. To overcome this limitation we introduce PonderNet, a new algorithm that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet requires minimal changes to the network architecture, and learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. On a complex synthetic problem, PonderNet dramatically improves performance over previous state-of-the-art adaptive computation methods, and also succeeds at extrapolation tests where traditional neural networks fail. Finally, we tested our method on a real-world question-answering dataset, where we matched the current state-of-the-art results using less compute.
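The core bookkeeping in PonderNet is its halting distribution: at step n the network emits a conditional halting probability λ_n, so the probability of halting exactly at step n is p_n = λ_n ∏_{k<n}(1 − λ_k), and training minimizes the per-step losses weighted by this distribution (plus a KL regularizer toward a geometric prior, omitted here). A small sketch, with function names of my own choosing:

```python
import numpy as np

def halting_distribution(lambdas):
    """p_n = lambda_n * prod_{k<n} (1 - lambda_k).

    lambdas : per-step conditional halting probabilities; the final step
    is forced to halt so the distribution sums to one.
    """
    lambdas = np.asarray(lambdas, dtype=float).copy()
    lambdas[-1] = 1.0  # always halt at the last allowed step
    not_halted = np.cumprod(np.concatenate(([1.0], 1.0 - lambdas[:-1])))
    return lambdas * not_halted

def expected_loss(step_losses, lambdas):
    """Reconstruction part of the PonderNet loss: per-step prediction
    losses weighted by the probability of halting at that step."""
    p = halting_distribution(lambdas)
    return float(p @ np.asarray(step_losses, dtype=float))
```

For example, constant λ = 0.5 over three steps gives halting probabilities (0.5, 0.25, 0.25); at evaluation time the network can simply sample the halting step from this distribution, spending more steps on harder inputs.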

https://deepmind.com/research/publications/2021/Ponder-Net
https://arxiv.org/abs/2107.05407

#Paper
#DeepMind
@BLI_Channel
PsiPhi: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning

We study reinforcement learning (RL) with no-reward demonstrations, a setting in which an RL agent has access to additional data from the interaction of other agents with the same environment. However, it has no access to the rewards or goals of these agents, and their objectives and levels of expertise may vary widely. These assumptions are common in multi-agent settings, such as autonomous driving. To effectively use this data, we turn to the framework of successor features. This allows us to disentangle shared features and dynamics of the environment from agent-specific rewards and policies. We propose a multi-task inverse reinforcement learning (IRL) algorithm, called inverse temporal difference learning (ITD), that learns shared state features, alongside per-agent successor features and preference vectors, purely from demonstrations without reward labels. We further show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi \Phi$-learning (pronounced `Sci-Fi'). We provide empirical evidence for the effectiveness of $\Psi \Phi$-learning as a method for improving RL, IRL, imitation, and few-shot transfer, and derive worst-case bounds for its performance in zero-shot transfer to new tasks.
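The successor-feature decomposition underlying ΨΦ-learning writes Q^π(s,a) = ψ^π(s,a)·w, where ψ collects expected discounted sums of shared state features and w is an agent-specific preference vector, so ψ can be shared across agents that differ only in w. A tabular sketch (the names and the plain fixed-point iteration are illustrative; this is not the paper's ITD algorithm, which learns these quantities from demonstrations):

```python
import numpy as np

def successor_features(phi, P, policy, gamma=0.9, n_iters=200):
    """Tabular successor features by iterating the Bellman-style equation
    psi(s, a) = phi(s, a) + gamma * sum_s' P(s'|s, a) psi(s', pi(s')).

    phi    : (S, A, d) one-step state features, shared across agents
    P      : (S, A, S) transition probabilities
    policy : (S,) deterministic action taken in each state
    """
    S, A, d = phi.shape
    psi = np.zeros_like(phi)
    for _ in range(n_iters):
        psi_next = psi[np.arange(S), policy]          # psi(s', pi(s')), shape (S, d)
        psi = phi + gamma * np.einsum('sat,td->sad', P, psi_next)
    return psi

def q_values(psi, w):
    # agent-specific preferences w turn the shared psi into Q(s, a) = psi . w
    return psi @ w
```

This separation is what enables the transfer results in the paper: once ψ is known, the Q-values for a new reward described by some w' are available in closed form, without further interaction.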

https://deepmind.com/research/publications/2021/PsiPhi-Reinforcement-Learning-with-Demonstrations-using-Successor-Features-and-Inverse-Temporal-Difference-Learning
https://arxiv.org/abs/2102.12560

#Paper
#DeepMind
@BLI_Channel
The objective of this edited book is to reach academic scientists, researchers, data scientists, and scholars, enabling them to exchange experiences and research outcomes across domains, to develop cost-effective, intelligent, and adaptive models for the challenges of decision making, and to help researchers carry this work to the next level. The book incorporates advances in machine-intelligence techniques for the decision-making process and their applications. It also provides a premier interdisciplinary platform for scientists, researchers, practitioners, and educators to share their thoughts on recent innovations, trends, developments, practical challenges, and advancements in data mining, machine learning, soft computing, and decision science. It addresses recent developments and applications in machine learning, and focuses on the usefulness of applied intelligent techniques in the decision-making process.

#Book
@BLI_Channel
Conference Schedule:

AAAI 2022 (abstract): 1 day left!
AAAI 2022 (paper): 10 days left!
ICLR 2022 (abstract): 31 days left!
ICLR 2022 (paper): 38 days left!
CVPR 2022 (abstract): 73 days left!
CVPR 2022 (paper): 80 days left!

#Conference
@BLI_Channel
Internship position in the Max Planck Institutes for Biological Cybernetics and Intelligent Systems:

The sciences of biological and artificial intelligence are rapidly growing research fields that need enthusiastic minds with a keen interest in solving challenging questions. The Max Planck Institutes for Biological Cybernetics and Intelligent Systems offer students at the Bachelor or Master level paid internships during the summer of 2022. The CaCTüS Internship is aimed at young scientists who are held back by personal, financial, regional, or societal constraints, helping them develop their research careers and gain access to first-class education. The program is designed to foster inclusion, diversity, equity, and access to excellent scientific facilities. We specifically encourage applications from students living in low- and middle-income countries, which are currently underrepresented in the Max Planck Society research community. Successful applicants will work with top-level researchers on cutting-edge research projects for three months.

https://www.projects.tuebingen.mpg.de

@BLI_Channel
Forwarded from 🧠Cognitive Modeling and Artificial Intelligence Working Group (NBML) (working group secretary)
Hello everyone,
Friends, here is our working group's next program⤵️

"Response Time Modeling: Why and How?"

Speaker: Amirhossein Hadian

🗓Thursday, Dey 23
Time: 17:30 to 19:00

Hope to see you there🌹

https://nbml.ir/FA/events/The-18th--Interdisciplinary-Seminar-of-Iran%E2%80%99s-Brain-Mapping-Student-Branch