Convolutional Networks on Graphs for Learning Molecular Fingerprints
We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.
https://github.com/HIPS/neural-fingerprint
https://arxiv.org/abs/1509.09292
#Paper
@BLI_Channel
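The paper's core trick is replacing the hash-and-set-bit step of circular fingerprints with a differentiable neighbor sum followed by a softmax "write" into a fixed-size fingerprint vector. A minimal NumPy sketch of that idea (not the authors' implementation; the `neural_fingerprint` helper and all weight shapes are illustrative):

```python
import numpy as np

def neural_fingerprint(atom_feats, adjacency, W_layers, fp_size):
    """Minimal sketch of a differentiable circular fingerprint:
    at each layer every atom pools its neighbors' features,
    applies a learned transform, and 'writes' a softmax
    activation into a fixed-size fingerprint vector (a smooth
    stand-in for the hash-and-set-bit step of classic
    circular fingerprints)."""
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    h = atom_feats                            # (n_atoms, d)
    fp = np.zeros(fp_size)
    for W, W_out in W_layers:                 # W: (d, d), W_out: (d, fp_size)
        # neighbor aggregation: each atom sums itself + its neighbors
        h = np.tanh((adjacency + np.eye(len(h))) @ h @ W)
        for atom_vec in h:                    # soft "set one bit" per atom
            fp += softmax(atom_vec @ W_out)
    return fp
```

Because every operation is differentiable, the fingerprint can be trained end-to-end, unlike a hashed circular fingerprint.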
PhD Position in Machine Learning and Deep Learning:
https://ellis.eu/news/ellis-phd-program-call-for-applications-deadline-november-15-2021
#PhD
@BLI_Channel
A biologically plausible neural network for Slow Feature Analysis
Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.
https://github.com/flatironinstitute/bio-sfa
https://arxiv.org/abs/2010.12644
#Paper
@BLI_Channel
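For background, classic offline SFA (which Bio-SFA turns into an online, biologically plausible network) reduces to whitening the input and then solving an eigenproblem on the covariance of the temporal differences. A minimal NumPy sketch of that offline baseline (the `sfa` helper is illustrative, not the paper's Bio-SFA algorithm):

```python
import numpy as np

def sfa(x):
    """Classic (offline) Slow Feature Analysis.
    x: (T, d) time series. Returns projection directions as
    columns, ordered from slowest to fastest feature."""
    x = x - x.mean(axis=0)
    # Whiten so that slowness is measured in a normalized space.
    cov = x.T @ x / len(x)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)        # whitening matrix
    z = x @ W
    # Slowness = variance of the temporal difference; the slowest
    # features are the smallest eigenvectors of the diff covariance.
    dz = np.diff(z, axis=0)
    dcov = dz.T @ dz / len(dz)
    _, dvecs = np.linalg.eigh(dcov)   # eigh sorts ascending
    return W @ dvecs                  # columns: slowest first
```

The batch eigendecompositions above are exactly what a biologically plausible implementation must avoid; Bio-SFA replaces them with online, local synaptic updates.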
PhD Position in Quantum Machine Learning:
https://www.academictransfer.com/en/303263/phd-candidate-or-postdoctoral-researcher-in-quantum-machine-learning/
#PhD
@BLI_Channel
PonderNet: Learning to Ponder
In standard neural networks the amount of computation used is directly proportional to the size of the inputs, rather than to the complexity of the problem being learnt. To overcome this limitation we introduce PonderNet, a new algorithm that learns to adapt the amount of computation to the complexity of the problem at hand. PonderNet requires minimal changes to the network architecture and learns end-to-end the number of computational steps needed to achieve an effective compromise between training prediction accuracy, computational cost, and generalization. On a complex synthetic problem, PonderNet dramatically improves performance over previous state-of-the-art adaptive computation methods, also succeeding at extrapolation tests where traditional neural networks fail. Finally, we tested our method on a real-world question-answering dataset, where we matched the current state-of-the-art results using less compute.
https://deepmind.com/research/publications/2021/Ponder-Net
https://arxiv.org/abs/2107.05407
#Paper
#DeepMind
@BLI_Channel
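The halting side of PonderNet fits in a few lines: at each step the network emits a conditional halting probability λ_n, and the unconditional probability of stopping at step n is λ_n times the probability of not having stopped earlier. A hedged NumPy sketch (the `halting_distribution` helper is illustrative, not DeepMind's code):

```python
import numpy as np

def halting_distribution(lambdas):
    """PonderNet-style per-step halting probabilities:
        p_n = lambda_n * prod_{j<n} (1 - lambda_j),
    where `lambdas` are the conditional halting probabilities
    the network emits at each pondering step."""
    lambdas = np.asarray(lambdas, dtype=float)
    # probability of still running when step n begins
    keep_going = np.cumprod(np.concatenate([[1.0], 1.0 - lambdas[:-1]]))
    return lambdas * keep_going
```

Training then weights the per-step prediction losses by these p_n and regularizes them toward a geometric prior, which is how the network learns how long to ponder.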
PsiPhi: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
We study reinforcement learning (RL) with no-reward demonstrations, a setting in which an RL agent has access to additional data from the interaction of other agents with the same environment. However, it has no access to the rewards or goals of these agents, and their objectives and levels of expertise may vary widely. These assumptions are common in multi-agent settings, such as autonomous driving. To effectively use this data, we turn to the framework of successor features. This allows us to disentangle shared features and dynamics of the environment from agent-specific rewards and policies. We propose a multi-task inverse reinforcement learning (IRL) algorithm, called inverse temporal difference learning (ITD), that learns shared state features, alongside per-agent successor features and preference vectors, purely from demonstrations without reward labels. We further show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi \Phi$-learning (pronounced `Sci-Fi'). We provide empirical evidence for the effectiveness of $\Psi \Phi$-learning as a method for improving RL, IRL, imitation, and few-shot transfer, and derive worst-case bounds for its performance in zero-shot transfer to new tasks.
https://deepmind.com/research/publications/2021/PsiPhi-Reinforcement-Learning-with-Demonstrations-using-Successor-Features-and-Inverse-Temporal-Difference-Learning
https://arxiv.org/abs/2102.12560
#Paper
#DeepMind
@BLI_Channel
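The object at the heart of $\Psi \Phi$-learning is the successor feature ψ, which satisfies a TD-style recursion. A minimal tabular sketch of that recursion (the `sf_td_update` helper, and the convention of using the next state's features as the transition features, are assumptions for illustration, not the paper's exact formulation; the paper's ITD additionally learns φ and the preference vectors from demonstrations):

```python
import numpy as np

def sf_td_update(psi, phi, s, a, s_next, a_next, alpha=0.1, gamma=0.9):
    """One tabular TD update for successor features: psi[s, a]
    estimates the expected discounted sum of future state
    features phi, so that for any preference vector w the value
    is Q(s, a) = psi[s, a] @ w. This is the disentangling the
    abstract describes: phi captures shared environment
    structure, w captures agent-specific rewards."""
    target = phi[s_next] + gamma * psi[s_next, a_next]
    psi[s, a] += alpha * (target - psi[s, a])
    return psi
```

Because ψ is reward-free, it can be learned from the no-reward demonstrations of other agents and recombined with a new preference vector w at transfer time.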
This edited book aims to bring together academic scientists, researchers, data scientists, and scholars to exchange experiences and research outcomes across domains, with the goal of developing cost-effective, intelligent, and adaptive models for the many challenges of decision making. It covers advances in machine-intelligence techniques for the decision-making process and their applications, and provides an interdisciplinary platform for scientists, researchers, practitioners, and educators to share recent innovations, trends, developments, practical challenges, and advancements in data mining, machine learning, soft computing, and decision science.
#Book
@BLI_Channel
Conference Schedule:
AAAI 2022 (abstract): 1 day left!
AAAI 2022 (paper): 10 days left!
ICLR 2022 (abstract): 31 days left!
ICLR 2022 (paper): 38 days left!
CVPR 2022 (abstract): 73 days left!
CVPR 2022 (paper): 80 days left!
#Conference
@BLI_Channel
PhD Position in Natural Language Processing / Machine Learning:
https://www.kth.se/en/om/work-at-kth/lediga-jobb/what:job/jobID:426117/where:4/
#PhD
@BLI_Channel
The Master's program Translational Neuroscience introduces itself and shows the study content and focus that students can expect.
https://www.youtube.com/watch?v=-Fz6PyvjvI8
@BLI_Channel
Internship position at the Max Planck Institutes for Biological Cybernetics and Intelligent Systems:
The sciences of biological and artificial intelligence are rapidly growing research fields that need enthusiastic minds with a keen interest in solving challenging questions. The Max Planck Institutes for Biological Cybernetics and Intelligent Systems offer students at the Bachelor or Master level paid internships during the summer of 2022. The CaCTüS Internship is aimed at young scientists who are held back by personal, financial, regional or societal constraints to help them develop their research careers and gain access to first-class education. The program is designed to foster inclusion, diversity, equity and access to excellent scientific facilities. We specifically encourage applications from students living in low- and middle-income countries which are currently underrepresented in the Max Planck Society research community. Successful applicants will work with top-level researchers on cutting-edge research projects for three months.
https://www.projects.tuebingen.mpg.de
@BLI_Channel
PhD position at the Max Planck Institute for Biological Cybernetics:
https://www.mpg.de/17785617/phd-and-postdoc-positions-chronobiology-psychology-vision-science-human-neuroscience
@BLI_Channel
Forwarded from 🧠 Cognitive Modeling and Artificial Intelligence Working Group (NBML) (Working Group Secretary)
Hello everyone,
Friends, here is our working group's next program ⤵️
"Response-Time Modeling: Why and How?"
Speaker: Amirhossein Hadian
🗓 Thursday, 23 Dey
⏰ Time: 17:30 to 19:00
Hope to see you there 🌹
https://nbml.ir/FA/events/The-18th--Interdisciplinary-Seminar-of-Iran%E2%80%99s-Brain-Mapping-Student-Branch
PhD Position at the University of Amsterdam on Causality for Domain Adaptation and Optimization:
https://vacatures.uva.nl/UvA/job/PhD-Position-in-Causality-for-Domain-Adaptation-and-Optimization/738606902/
@BLI_Channel