Are Deep Neural Networks Dramatically Overfitted?

If you are like me, entering the field of deep learning with experience in traditional machine learning, you may often ponder this question: since a typical deep neural network has so many parameters and its training error can easily be perfect, it should surely suffer from substantial overfitting. How could it ever generalize to out-of-sample data points?

Tremendous piece by Lilian Weng about deep learning. If you have ever wondered why and how deep learning works without hopelessly overfitting, this article is for you. It includes a lot of references to interesting and current research.

Swift for TensorFlow

Swift for TensorFlow is a next-generation platform for machine learning, incorporating the latest research across machine learning, compilers, differentiable programming, systems design, and beyond. This project is at version 0.2; it is neither feature-complete nor production-ready. But it is ready for pioneers to try in your own projects, give us feedback, and help shape the future!

This is truly exciting, and sooner or later I will have to brush up on my Swift skills. Take a look at the video above and see what the future of ML research holds. Automatic differentiation looks amazing.
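To give a flavour of why it looks so appealing, here is a minimal sketch of language-level automatic differentiation, assuming the Swift for TensorFlow 0.2 toolchain with its `@differentiable` attribute and `gradient(at:in:)` function (the exact spelling of this API has been evolving between releases):

    import TensorFlow  // ships with the Swift for TensorFlow toolchain

    // An ordinary Swift function; @differentiable asks the compiler
    // to synthesize its derivative.
    @differentiable
    func loss(_ w: Float) -> Float {
        let prediction = w * 3.0
        return (prediction - 6.0) * (prediction - 6.0)
    }

    // Reverse-mode AD on a plain function: at w = 1.0,
    // loss = 9.0 and dloss/dw = 2 * (3w - 6) * 3 = -18.0.
    print(gradient(at: 1.0, in: loss))  // -18.0

The interesting design choice is that the derivative is generated by the compiler for ordinary Swift functions, rather than recorded on a tape or assembled from a graph at runtime.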

Robots in Space

The term “Artificial Intelligence (AI)” comprises all techniques that enable computers to mimic intelligence, for example, computers that analyse data or the systems embedded in an autonomous vehicle. Usually, artificially intelligent systems are taught by humans — a process that involves writing an awful lot of complex computer code.

Nice little overview of the current initiatives and projects regarding artificial intelligence in space at ESA. Lots of interesting stuff going on there. I am personally super excited about autonomous robots and about using deep learning for navigation and docking of spacecraft.

Could machine learning mean the end of understanding in science?

If prediction is in fact the primary goal of science, how should we modify the scientific method, the algorithm that for centuries has allowed us to identify errors and correct them?

Interesting piece by UofT’s Amar Vutha on how machine learning is reshaping the scientific landscape. I fundamentally disagree that the goal of science is to predict nature. Prediction is great for applied problems like weather forecasting, but neglecting understanding carries a great risk: the scientific method is then reduced to guessing what the next step could be and is no longer effective at iterating towards a fundamental truth. With all due respect, let’s leave that “goal-oriented” approach to problem solvers and engineers.