Recommended Papers for Review (within each topic, the articles listed first are review papers; those listed after them address specific topics)

You can choose from the list below, or search for other papers that suit your interests and are related to the course topic. Consult the lead instructor if you have any questions.

  • Collaborative Robotics:
    • Cherubini, A. and Navarro-Alarcon, D., 2021. Sensor-based control for collaborative robots: Fundamentals, challenges, and opportunities. Frontiers in Neurorobotics, p.113.
    • Gualtieri, L., Rauch, E. and Vidoni, R., 2021. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robotics and Computer-Integrated Manufacturing, 67, p.101998.
    • Peshkin, M.A., Colgate, J.E., Wannasuphoprasit, W., Moore, C.A., Gillespie, R.B. and Akella, P., 2001. Cobot architecture. IEEE Transactions on Robotics and Automation, 17(4), pp.377-390.
    • Gillespie, R.B., Colgate, J.E. and Peshkin, M.A., 2001. A general framework for cobot control. IEEE Transactions on Robotics and Automation, 17(4), pp.391-401.
    • Djuric, A.M., Urbanic, R.J. and Rickli, J.L., 2016. A framework for collaborative robot (CoBot) integration in advanced manufacturing systems. SAE International Journal of Materials and Manufacturing, 9(2), pp.457-464.
    • Hong, D.K., Hwang, W., Lee, J.Y. and Woo, B.C., 2017. Design, analysis, and experimental validation of a permanent magnet synchronous motor for articulated robot applications. IEEE Transactions on Magnetics, 54(3), pp.1-4.
  • Robot Learning (see the brief Q-learning sketch after this list)
    • Lake, B.M., Ullman, T.D., Tenenbaum, J.B. and Gershman, S.J., 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, p.e253.
    • Peters, J., Lee, D.D., Kober, J., Nguyen-Tuong, D., Bagnell, J.A. and Schaal, S., 2016. Robot learning. Springer Handbook of Robotics, pp.357-398.
    • Ravichandar, H., Polydoros, A.S., Chernova, S. and Billard, A., 2020. Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems, 3, pp.297-330.
    • Sünderhauf, N., Brock, O., Scheirer, W., Hadsell, R., Fox, D., Leitner, J., Upcroft, B., Abbeel, P., Burgard, W., Milford, M. and Corke, P., 2018. The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(4-5), pp.405-420.
    • Billard, A. and Kragic, D., 2019. Trends and challenges in robot manipulation. Science, 364(6446), p.eaat8414.
    • Nguyen, H. and La, H., 2019, February. Review of deep reinforcement learning for robot manipulation. In 2019 Third IEEE International Conference on Robotic Computing (IRC) (pp. 590-595). IEEE.
    • Kroemer, O., Niekum, S. and Konidaris, G., 2021. A review of robot learning for manipulation: Challenges, representations, and algorithms. The Journal of Machine Learning Research, 22(1), pp.1395-1476.
    • Mason, M.T., 2018. Toward robotic manipulation. Annual Review of Control, Robotics, and Autonomous Systems, 1, pp.1-28.
    • Zhu, J., Cherubini, A., Dune, C., Navarro-Alarcon, D., Alambeigi, F., Berenson, D., Ficuciello, F., Harada, K., Kober, J., Li, X. and Pan, J., 2022. Challenges and outlook in robotic manipulation of deformable objects. IEEE Robotics & Automation Magazine, 29(3), pp.67-77.
    • Feng, Z., Hu, G., Sun, Y. and Soon, J., 2020. An overview of collaborative robotic manipulation in multi-robot systems. Annual Reviews in Control, 49, pp.113-127.
    • Zhu, Z. and Hu, H., 2018. Robot learning from demonstration in robotic assembly: A survey. Robotics, 7(2), p.17.
    • Kober, J., Bagnell, J.A. and Peters, J., 2013. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11), pp.1238-1274.
    • Connell, J.H. and Mahadevan, S. eds., 2012. Robot learning (Vol. 233). Springer Science & Business Media.
    • Howe, R.D., 1993. Tactile sensing and control of robotic manipulation. Advanced Robotics, 8(3), pp.245-261.
    • Thrun, S. and Mitchell, T.M., 1995. Lifelong robot learning. Robotics and Autonomous Systems, 15(1-2), pp.25-46.
    • James, S., Ma, Z., Arrojo, D.R. and Davison, A.J., 2020. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2), pp.3019-3026.
    • Liu, Z., Liu, Q., Xu, W., Wang, L. and Zhou, Z., 2022. Robot learning towards smart robotic manufacturing: A review. Robotics and Computer-Integrated Manufacturing, 77, p.102360.
    • Schaal, S. and Atkeson, C.G., 2010. Learning control in robotics. IEEE Robotics & Automation Magazine, 17(2), pp.20-29.
    • Proceedings of Machine Learning Research – Conference on Robot Learning Series: 2022 | 2021 | 2020 | 2019 | 2018 | 2017
    • Fan, L., Zhu, Y., Zhu, J., Liu, Z., Zeng, O., Gupta, A., Creus-Costa, J., Savarese, S. and Fei-Fei, L., 2018, October. Surreal: Open-source reinforcement learning framework and robot manipulation benchmark. In Conference on Robot Learning (pp. 767-782). PMLR.
    • Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V. and Levine, S., 2018, October. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning (pp. 651-673). PMLR.
    • Redmon, J., Divvala, S., Girshick, R. and Farhadi, A., 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788).
    • He, K., Gkioxari, G., Dollár, P. and Girshick, R., 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision (pp. 2961-2969).
    • Qi, C.R., Yi, L., Su, H. and Guibas, L.J., 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30.
    • Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M. and Solomon, J.M., 2019. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5), pp.1-12.
    • Kipf, T., Van der Pol, E. and Welling, M., 2019. Contrastive learning of structured world models. arXiv preprint arXiv:1911.12247.
    • Sitzmann, V., Martel, J., Bergman, A., Lindell, D. and Wetzstein, G., 2020. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33, pp.7462-7473.
    • Florence, P.R., Manuelli, L. and Tedrake, R., 2018. Dense object nets: Learning dense visual object descriptors by and for robotic manipulation. arXiv preprint arXiv:1806.08756.
    • Kulkarni, T.D., Gupta, A., Ionescu, C., Borgeaud, S., Reynolds, M., Zisserman, A. and Mnih, V., 2019. Unsupervised learning of object keypoints for perception and control. Advances in Neural Information Processing Systems, 32.
    • Lee, M.A., Zhu, Y., Srinivasan, K., Shah, P., Savarese, S., Fei-Fei, L., Garg, A. and Bohg, J., 2019, May. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 8943-8950). IEEE.
    • Jonschkowski, R., Rastogi, D. and Brock, O., 2018. Differentiable particle filters: End-to-end learning with algorithmic priors. arXiv preprint arXiv:1805.11122.
    • Karkus, P., Hsu, D. and Lee, W.S., 2018, October. Particle filter networks with application to visual localization. In Conference on robot learning (pp. 169-178). PMLR.
    • Xiang, Y., Schmidt, T., Narayanan, V. and Fox, D., 2017. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199.
    • Wang, He, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J. Guibas. “Normalized object coordinate space for category-level 6d object pose and size estimation.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2642-2651. 2019.
    • Schmidt, T., Newcombe, R.A. and Fox, D., 2014, July. DART: Dense Articulated Real-Time Tracking. In Robotics: Science and systems (Vol. 2, No. 1, pp. 1-9).
    • Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A. and Torr, P.H., 2016. Fully-convolutional siamese networks for object tracking. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part II 14 (pp. 850-865). Springer International Publishing.
    • Jayaraman, D. and Grauman, K., 2018. Learning to look around: Intelligently exploring unseen environments for unknown tasks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1238-1247).
    • Agrawal, P., Nair, A.V., Abbeel, P., Malik, J. and Levine, S., 2016. Learning to poke by poking: Experiential learning of intuitive physics. Advances in Neural Information Processing Systems, 29.
    • Schulman, J., Levine, S., Abbeel, P., Jordan, M. and Moritz, P., 2015, June. Trust region policy optimization. In International conference on machine learning (pp. 1889-1897). PMLR.
    • Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P. and Levine, S., 2018. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905.
    • Deisenroth, M.P., Rasmussen, C.E. and Fox, D., 2011. Learning to control a low-cost manipulator using data-efficient reinforcement learning. Robotics: Science and Systems VII, 7, pp.57-64.
    • Hafner, D., Lillicrap, T., Ba, J. and Norouzi, M., 2019. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603.
    • Ross, S., Gordon, G. and Bagnell, D., 2011, June. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 627-635). JMLR Workshop and Conference Proceedings.
    • Pan, Y., Cheng, C.A., Saigol, K., Lee, K., Yan, X., Theodorou, E. and Boots, B., 2017. Agile autonomous driving using end-to-end deep imitation learning. arXiv preprint arXiv:1709.07174.
    • Abbeel, P. and Ng, A.Y., 2004, July. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning (p. 1).
    • Ziebart, B.D., Maas, A.L., Bagnell, J.A. and Dey, A.K., 2008, July. Maximum entropy inverse reinforcement learning. In Aaai (Vol. 8, pp. 1433-1438).
    • Ho, J. and Ermon, S., 2016. Generative adversarial imitation learning. Advances in Neural Information Processing Systems, 29.
    • Ghasemipour, S.K.S., Zemel, R. and Gu, S., 2020, May. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning (pp. 1259-1277). PMLR.
    • Finn, C., Abbeel, P. and Levine, S., 2017, July. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning (pp. 1126-1135). PMLR.
    • Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D. and Lillicrap, T., 2016, June. Meta-learning with memory-augmented neural networks. In International conference on machine learning (pp. 1842-1850). PMLR.
    • Wang, R., Lehman, J., Clune, J. and Stanley, K.O., 2019. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions. arXiv preprint arXiv:1901.01753.
    • Kulkarni, T.D., Narasimhan, K., Saeedi, A. and Tenenbaum, J., 2016. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. Advances in Neural Information Processing Systems, 29.
    • Xu, D., Nair, S., Zhu, Y., Gao, J., Garg, A., Fei-Fei, L. and Savarese, S., 2018, May. Neural task programming: Learning to generalize across hierarchical tasks. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3795-3802). IEEE.
    • Kaelbling, L. and Lozano-Perez, T., 2010, May. Hierarchical task and motion planning in the now. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA.
    • Toussaint, M.A., Allen, K.R., Smith, K.A. and Tenenbaum, J.B., 2018. Differentiable physics and stable modes for tool-use and manipulation planning. In Robotics: Science and Systems.
    • Edmonds, M., Ma, X., Qi, S., Zhu, Y., Lu, H. and Zhu, S.C., 2020, April. Theory-based causal transfer: Integrating instance-level induction and abstract-level structure learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 02, pp. 1283-1291).
    • Kurutach, T., Tamar, A., Yang, G., Russell, S.J. and Abbeel, P., 2018. Learning plannable representations with causal infogan. Advances in Neural Information Processing Systems, 31.
    • Ramos, F., Possas, R.C. and Fox, D., 2019. Bayessim: adaptive domain randomization via probabilistic inference for robotics simulators. arXiv preprint arXiv:1906.01728.
    • Müller, M., Dosovitskiy, A., Ghanem, B. and Koltun, V., 2018. Driving policy transfer via modularity and abstraction. arXiv preprint arXiv:1804.09364.
    • Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Ojea, J.A. and Goldberg, K., 2017. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312.
    • Viereck, U., Pas, A., Saenko, K. and Platt, R., 2017, October. Learning a visuomotor controller for real world robotic grasping using simulated depth images. In Conference on robot learning (pp. 291-300). PMLR.
    • Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V. and Levine, S., 2018. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293.
    • Hwangbo, J., Lee, J., Dosovitskiy, A., Bellicoso, D., Tsounis, V., Koltun, V. and Hutter, M., 2019. Learning agile and dynamic motor skills for legged robots. Science Robotics, 4(26), p.eaau5872.
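
    The following is a minimal, self-contained sketch (in Python/NumPy) of the tabular Q-learning update that underlies many of the reinforcement-learning papers listed above (e.g., Kober et al., 2013). The toy one-dimensional "reach the goal" task, the purely random behavior policy, and all hyperparameters are illustrative assumptions chosen for brevity; they are not taken from any cited paper.

      import numpy as np

      # Toy 1-D chain: states 0..9, actions 0 = left / 1 = right, reward 1 for reaching state 9.
      n_states, n_actions = 10, 2
      goal = n_states - 1
      alpha, gamma = 0.1, 0.95          # learning rate and discount factor (illustrative values)
      Q = np.zeros((n_states, n_actions))
      rng = np.random.default_rng(0)

      def step(s, a):
          """Move one cell left or right (clipped to the chain); reward 1 only at the goal."""
          s_next = min(max(s + (1 if a == 1 else -1), 0), goal)
          return s_next, (1.0 if s_next == goal else 0.0), s_next == goal

      for episode in range(300):
          s, done = 0, False
          while not done:
              a = int(rng.integers(n_actions))   # purely exploratory behavior policy (Q-learning is off-policy)
              s_next, r, done = step(s, a)
              # Temporal-difference update toward the greedy one-step backup.
              Q[s, a] += alpha * (r + gamma * (0.0 if done else np.max(Q[s_next])) - Q[s, a])
              s = s_next

      print(np.argmax(Q, axis=1))  # greedy policy: should pick "right" (1) in every non-terminal state
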
  • Machine Learning Basics (see the brief backpropagation sketch after this subsection)
    • Backpropagation
      • Rumelhart, D.E., Hinton, G.E. and Williams, R.J., 1986. Learning representations by back-propagating errors. Nature, 323(6088), pp.533-536.
    • Neural Networks
      • LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P., 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), pp.2278-2324.
      • Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2017. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp.84-90.
      • Simonyan, K. and Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
      • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. and Rabinovich, A., 2015. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).
      • He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
    • Sequential Data Processing
      • Graves, A., 2012. Long short-term memory. Supervised sequence labelling with recurrent neural networks, pp.37-45.
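
    The following is a minimal NumPy sketch of the backpropagation rule described in Rumelhart et al. (1986), applied to a tiny two-layer network fit to the XOR problem. The network size, learning rate, iteration count, and the XOR task itself are illustrative assumptions, not details from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
      y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

      W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)            # hidden layer (8 units)
      W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)            # output layer
      lr = 0.5
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(10000):
          # Forward pass.
          h = sigmoid(X @ W1 + b1)
          out = sigmoid(h @ W2 + b2)
          # Backward pass: propagate the squared-error gradient layer by layer.
          d_out = (out - y) * out * (1.0 - out)
          d_h = (d_out @ W2.T) * h * (1.0 - h)
          # Gradient-descent parameter updates.
          W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
          W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

      print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
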
  • Robot Learning
    • Robot Manipulation
      • Redmon, J. and Angelova, A., 2015, May. Real-time grasp detection using convolutional neural networks. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 1316-1322). IEEE.
      • Pinto, L. and Gupta, A., 2016, May. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA) (pp. 3406-3413). IEEE.
      • Zeng, A., Yu, K.T., Song, S., Suo, D., Walker, E., Rodriguez, A. and Xiao, J., 2017, May. Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge. In 2017 IEEE international conference on robotics and automation (ICRA) (pp. 1386-1383). IEEE.
      • Ha, H. and Song, S., 2022, January. Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding. In Conference on Robot Learning (pp. 24-33). PMLR.
      • Chen, T., Xu, J. and Agrawal, P., 2022, January. A system for general in-hand object re-orientation. In Conference on Robot Learning (pp. 297-307). PMLR.
      • Yang, P.C., Sasaki, K., Suzuki, K., Kase, K., Sugano, S. and Ogata, T., 2016. Repeatable folding task by humanoid robot worker using deep learning. IEEE Robotics and Automation Letters, 2(2), pp.397-403.
      • Calandra, R., Owens, A., Jayaraman, D., Lin, J., Yuan, W., Malik, J., Adelson, E.H. and Levine, S., 2018. More than a feeling: Learning to grasp and regrasp using vision and touch. IEEE Robotics and Automation Letters, 3(4), pp.3300-3307.
      • Chu, F.J., Xu, R. and Vela, P.A., 2018. Real-world multiobject, multigrasp detection. IEEE Robotics and Automation Letters, 3(4), pp.3355-3362.
      • Yang, L., Wan, F., Wang, H., Liu, X., Liu, Y., Pan, J. and Song, C., 2020. Rigid-soft interactive learning for robust grasping. IEEE Robotics and Automation Letters, 5(2), pp.1720-1727.
      • Asif, U., Bennamoun, M. and Sohel, F.A., 2017. RGB-D object recognition and grasp detection using hierarchical cascaded forests. IEEE Transactions on Robotics, 33(3), pp.547-564.
      • Lenz, I., Lee, H. and Saxena, A., 2015. Deep learning for detecting robotic grasps. The International Journal of Robotics Research, 34(4-5), pp.705-724.
      • Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J. and Quillen, D., 2018. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International journal of robotics research, 37(4-5), pp.421-436.
      • Andrychowicz, O.M., Baker, B., Chociej, M., Jozefowicz, R., McGrew, B., Pachocki, J., Petron, A., Plappert, M., Powell, G., Ray, A. and Schneider, J., 2020. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1), pp.3-20.
      • Morrison, D., Corke, P. and Leitner, J., 2020. Learning robust, real-time, reactive robotic grasping. The International journal of robotics research, 39(2-3), pp.183-201.
      • Fang, K., Zhu, Y., Garg, A., Kurenkov, A., Mehta, V., Fei-Fei, L. and Savarese, S., 2020. Learning task-oriented grasping for tool manipulation from simulated self-supervision. The International Journal of Robotics Research, 39(2-3), pp.202-216.
    • Robot Sim2Real Transfer
      • Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W. and Abbeel, P., 2017, September. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 23-30). IEEE.
      • Bousmalis, K., Irpan, A., Wohlhart, P., Bai, Y., Kelcey, M., Kalakrishnan, M., Downs, L., Ibarz, J., Pastor, P., Konolige, K. and Levine, S., 2018, May. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 4243-4250). IEEE.
      • Tanwani, A., 2021, October. DIRL: Domain-invariant representation learning for sim-to-real transfer. In Conference on Robot Learning (pp. 1558-1571). PMLR.
    • Robot Locomotion & Autonomous Driving
      • Zhou, M., Luo, J., Villella, J., Yang, Y., Rusu, D., Miao, J., Zhang, W., Alban, M., Fadakar, I., Chen, Z. and Huang, A.C., 2020. Smarts: Scalable multi-agent reinforcement learning training school for autonomous driving. arXiv preprint arXiv:2010.09776.
      • Agarwal, A., Kumar, A., Malik, J. and Pathak, D., 2022. Legged locomotion in challenging terrains using egocentric vision. arXiv preprint arXiv:2211.07638.
      • Fan, T., Long, P., Liu, W. and Pan, J., 2020. Distributed multi-robot collision avoidance via deep reinforcement learning for navigation in complex scenarios. The International Journal of Robotics Research, 39(7), pp.856-892.
    • General Policy/Skill Learning (see the brief behavioral-cloning sketch after this list)
      • Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., Finn, C., Gopalakrishnan, K., Hausman, K., Herzog, A. and Ho, D., 2022. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691.
      • Huang, K., Hu, E.S. and Jayaraman, D., 2022. Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning. arXiv preprint arXiv:2212.08961.
      • Rozo, L., Calinon, S., Caldwell, D.G., Jimenez, P. and Torras, C., 2016. Learning physical collaborative robot behaviors from human demonstrations. IEEE Transactions on Robotics, 32(3), pp.513-527.
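
      The following is a minimal sketch of behavioral cloning — fitting a policy directly to demonstrated state-action pairs — in the spirit of the learning-from-demonstration work listed above (e.g., Rozo et al., 2016). The synthetic "expert" demonstrations, the linear policy class, and the ridge regularizer are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic expert: drives a 2-D end-effector toward a fixed target with gain 0.5.
        target = np.array([1.0, 1.0])
        states = rng.uniform(-1.0, 1.0, size=(200, 2))        # demonstrated states
        actions = 0.5 * (target - states)                     # demonstrated expert actions

        # Fit a linear policy a = [s, 1] @ W by ridge-regularized least squares.
        S = np.hstack([states, np.ones((len(states), 1))])    # append a bias feature
        W = np.linalg.solve(S.T @ S + 1e-3 * np.eye(3), S.T @ actions)

        # Roll out the cloned policy from a new start state under simple integrator dynamics.
        s = np.array([-0.8, 0.6])
        for _ in range(20):
            a = np.append(s, 1.0) @ W
            s = s + a
        print(np.round(s, 3))                                 # should end near the target [1, 1]
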