Last active January 20, 2018 22:31
VS265 project ideas
I'm an undergrad in Physics interested in theoretical neuroscience and machine learning.
Some project ideas:
- Biology-inspired music compression with Minimum Probability Flow learning and Hopfield networks.
Image compression has been achieved with these techniques; adapting them
to music compression by mirroring the signal processing of the cochlea would be interesting.
Teammates with strong EECS backgrounds would be welcome for this project.
I'd also be interested in using Minimum Probability Flow learning in other ways. It's an efficient
learning rule that would make training very large networks tractable.
Minimum Probability Flow Learning: http://arxiv.org/abs/0906.4779
Robust exponential storage in Hopfield networks: http://arxiv.org/abs/1206.2081
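To make the learning rule concrete, here is a minimal sketch of MPF training for a fully observed Hopfield/Ising network with ±1 units and single-bit-flip neighbor states. The random patterns are hypothetical stand-ins for encoded audio frames; the energy is E(x) = -½ xᵀJx with symmetric J and zero diagonal, and the gradient is derived under those assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 16, 4
# random +/-1 patterns; hypothetical stand-ins for encoded audio frames
X = rng.choice([-1.0, 1.0], size=(D, n))

def mpf_objective(J, X):
    # K = mean over data of sum_i exp((E(x) - E(x with bit i flipped)) / 2).
    # For E(x) = -0.5 x^T J x, that half-difference is -x_i (J x)_i.
    return np.exp(-X * (X @ J)).sum(axis=1).mean()

J = np.zeros((n, n))
lr = 0.01
for _ in range(500):
    W = np.exp(-X * (X @ J))       # per-sample, per-bit-flip weights
    G = -(W * X).T @ X / D         # gradient of K with respect to J
    G = (G + G.T) / 2              # keep the weight matrix symmetric
    np.fill_diagonal(G, 0.0)       # no self-connections
    J -= lr * G

K0 = mpf_objective(np.zeros((n, n)), X)
K1 = mpf_objective(J, X)
print(K0, K1)                      # K should drop well below its initial value

# recall: corrupt a stored pattern, then run asynchronous threshold updates
x = X[0].copy()
x[rng.choice(n, size=2, replace=False)] *= -1.0
for _ in range(5):
    for i in range(n):
        x[i] = 1.0 if J[i] @ x >= 0 else -1.0
print(int((x == X[0]).sum()), "of", n, "bits recovered")
```

Because the model is fully observed, K is convex in J, so plain gradient descent is enough here; the appeal of MPF is that no sampling or partition-function estimate is needed during training.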
- Hamiltonian Monte Carlo sampling in continuous time.
This would extend a recent paper, authored by Mayur and Jascha Sohl-Dickstein,
on an efficient way to sample without detailed balance:
http://arxiv.org/abs/1409.5191
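For context on what that paper relaxes, here is a sketch of standard discrete-time HMC: leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction, which is exactly the detailed-balance step the non-reversible samplers aim to avoid. The 2D standard Gaussian target and the step-size settings are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# target: standard 2D Gaussian, so U(q) = 0.5 * q @ q and grad U = q
def U(q): return 0.5 * q @ q
def grad_U(q): return q

def hmc_step(q, eps=0.2, L=20):
    p = rng.standard_normal(q.shape)      # resample momentum
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject enforces detailed balance
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if rng.random() < np.exp(-dH) else q

q = np.zeros(2)
samples = []
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q.copy())
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))  # near 0 and 1
```

The continuous-time extension would replace the fixed leapfrog trajectory and accept/reject step with dynamics that leave the target invariant without reversibility.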
- Flatland-inspired: exploring the properties of RNNs constrained so that they
can be embedded in a plane (http://en.wikipedia.org/wiki/Graph_embedding):
connections between neurons in a 2D space cannot cross each other.
What sorts of computations are possible when a network is constrained like this?
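A quick way to see how restrictive this is: an undirected simple planar graph has at most 3n - 6 edges (a consequence of Euler's formula), so all-to-all recurrent wiring is ruled out immediately. The sketch below checks only this necessary (not sufficient) condition on a weight matrix's connectivity graph; an exact planarity test (e.g. NetworkX's `check_planarity`) would be needed to confirm embeddability. The sparsity level is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def planarity_bound_ok(W):
    """Necessary (not sufficient) condition for planar wiring:
    an undirected simple planar graph has at most 3n - 6 edges."""
    n = W.shape[0]
    A = (np.abs(W) + np.abs(W.T)) > 0     # undirected connectivity
    np.fill_diagonal(A, False)            # ignore self-loops
    edges = A.sum() // 2
    return bool(edges <= 3 * n - 6)

n = 50
dense = rng.standard_normal((n, n))                 # all-to-all RNN
sparse = dense * (rng.random((n, n)) < 1.0 / n)     # ~1 connection/neuron
print(planarity_bound_ok(dense), planarity_bound_ok(sparse))
```

Planarity forces connectivity to be sparse and mostly local, which is part of what makes the computational question interesting.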
- Investigate network implementations of a chess engine.
Modern chess engines use game-tree search and heuristics to play, which is very different from how humans play the game.
I'd like to create an engine that works in a more neurobiologically plausible manner.
This would probably involve unsupervised learning, sampling RNNs, and a feedforward classifying layer or two.
Let's have a chat if any of these ideas sound interesting to you.