Replicating-DeepMind

Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind. All the information is in our Wiki.

Progress: The system is up and running on a GPU cluster with cuda-convnet2. It can learn to play better than random, but not much better yet :) It is rather fast, but still about 2x slower than DeepMind's original system. RMSprop is not implemented yet; that is our next goal.
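
For reference, below is a minimal NumPy sketch of the RMSprop update rule mentioned above. The function name and hyperparameter values are illustrative assumptions, not taken from this project or from DeepMind's paper.

```python
import numpy as np

def rmsprop_update(w, grad, cache, lr=0.0002, decay=0.9, eps=1e-8):
    """One RMSprop step on parameters w given gradient grad.

    cache holds a running average of squared gradients (same shape as w).
    Hyperparameters here are placeholders for illustration only.
    """
    # Accumulate a running average of squared gradients.
    cache = decay * cache + (1.0 - decay) * grad ** 2
    # Scale the step by the root-mean-square of recent gradients.
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Example usage with dummy values:
w = np.zeros(10)
cache = np.zeros_like(w)
grad = np.random.randn(10)
w, cache = rmsprop_update(w, grad, cache)
```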

Note 1: You can also check out a popular-science article we wrote about the system for Robohub.

Note 2: Nathan Sprague has an implementation based on Theano. It performs fairly well; see his GitHub for more details.
