khalido
2/28/2018 - 11:45 AM

fastai 2018 course notes

Notes for fast.ai's deep learning course, part 1, 2018 edition.

Lesson 1: Cats & Dogs

Lesson 1, wiki

  • why fast.ai?
    • fastai is top down: it starts with working code that does something useful, then gradually peels back the layers to explain how it works.
    • the course recommends getting to the end of all the lessons first, then going through them again as many times as needed to pick up things missed, rather than going slowly through each one.
  • so first up, set up a GPU machine to run the code on

Once set up, update the OS and the fastai repo:

# update the machine OS
sudo apt update
sudo apt upgrade
conda install -c conda-forge jupyterlab # install jupyterlab

# make sure the fastai git repo is up to date
git pull # run this inside the fastai folder
conda env update # updates the fastai env
conda activate fastai # activates fastai env
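
A quick sanity check (my own snippet, not from the course): open a notebook in the fastai env and confirm that PyTorch can see the GPU and the library imports cleanly:

# sanity check in a notebook running the fastai env
import torch
from fastai.imports import *  # the old fastai 0.7 library shipped with the course repo

print(torch.cuda.is_available())  # should print True on a working GPU machine
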
  • Lesson 1: Image classification with Convolutional Neural Networks

    • first up, we learn how to classify images, both single-label and multi-label.
    • stochastic gradient descent with restarts (SGDR) lowers the learning rate as the model trains, with periodic ‘jumps’ back up to a higher rate so it doesn’t get stuck in a poor local minimum
    • the fast.ai library has a learn.lr_find() method to help find a good learning rate: it starts at a very low lr and keeps increasing it until the loss stops improving, then you pick a rate where the loss was still falling steeply (see the sketch after this list)
    • augment data with the tfms_from_model() function, passing the transformations to apply via aug_tfms=
    • homework: recreate my own versions of the fastai notebooks
  • note: the fastai library is frequently updated, so git pull it from GitHub instead of installing it with pip. To use the fastai library in my own github repo, create a symlink from the folder containing the jupyter notebooks to the library, like so: ln -s /path/to/fastai/fastai, then import things as per the fastai notebooks.
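
Putting the lesson 1 pieces together, here is a rough sketch of the dogs vs cats workflow using the old fastai (0.7) library from the course repo. This is from memory rather than copied from the notebook, and PATH/sz are my own placeholders, so check the actual lesson1 notebook for the exact calls:

# rough sketch of the lesson 1 workflow (old fastai 0.7 API from the course repo)
from fastai.conv_learner import *

PATH = "data/dogscats/"  # placeholder: expects train/ and valid/ dirs with one subfolder per class
sz = 224                 # image size the pretrained model expects
arch = resnet34          # pretrained ImageNet architecture

# data augmentation: aug_tfms adds random flips/zooms suitable for side-on photos
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)

# pretrained convnet with a new classification head for our classes
learn = ConvLearner.pretrained(arch, data, precompute=True)

# learning rate finder: ramps the lr up from a tiny value and records the loss
learn.lr_find()
learn.sched.plot()  # pick an lr where the loss is still falling steeply

# SGDR: cycle_len=1 anneals the lr within each cycle, then restarts it high
learn.fit(1e-2, 3, cycle_len=1)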

  • deep learning

    • a lot of problems are very hard, like identifying cancer cells in pathology slides.
    • traditionally you needed a lot of domain expertise and smart programmers to hand-code algorithms, and it took years of work
    • what DL models give us is an infinitely flexible function which can learn features/parameters directly from data in a fast and scalable way. It fits these parameters to the data by minimizing a loss function, typically using gradient descent, which nudges the parameters so they better predict our data (see the toy sketch at the end of these notes)
    • in a way, for certain classes of problems a deep learning model acts as a universal function approximator
    • this is all made possible because of cheap GPUs, which are super fast at crunching numbers
    • Google is using deep learning everywhere
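
To make the "flexible function + loss function + gradient descent" idea concrete, here is a toy example (my own, not from the course) that fits a single parameter to fake data by repeatedly nudging it in the direction that lowers the loss:

# toy gradient descent: fit y ≈ w * x by minimizing mean squared error
import numpy as np

x = np.linspace(0, 1, 50)
y = 3 * x + np.random.randn(50) * 0.1  # fake data generated from y = 3x plus noise

w = 0.0   # initial guess for the parameter
lr = 0.1  # learning rate

for step in range(100):
    pred = w * x
    loss = ((pred - y) ** 2).mean()     # mean squared error loss
    grad = (2 * (pred - y) * x).mean()  # derivative of the loss w.r.t. w
    w -= lr * grad                      # move w downhill on the loss

print(w)  # ends up close to 3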