Calculus is a branch of mathematics that focuses on the concepts of derivatives and integrals, which are techniques that describe how functions change.
Calculus, linear algebra, and probability are the foundations of machine learning. Learning these topics gives us a deeper understanding of how models work under the hood and can even enable us to develop new ones.
A derivative measures the rate of change of a function. If we think of a function that describes the position of a moving object with respect to time, the derivative of that function gives us the velocity of the object at any given time.
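We can see this numerically with a small sketch. The position function below, `s(t) = 5t²`, is an arbitrary choice for illustration; the `derivative` helper approximates the slope with a central difference:

```python
def position(t):
    """Hypothetical position of an object: s(t) = 5t^2 (chosen for illustration)."""
    return 5 * t ** 2

def derivative(f, t, h=1e-6):
    """Central-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

# Analytically, s'(t) = 10t, so the velocity at t = 3 should be 30.
velocity = derivative(position, 3.0)
print(round(velocity, 3))  # ≈ 30.0
```

The numerical result matches the analytical derivative `s'(t) = 10t` evaluated at `t = 3`.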
An integral gives us the accumulation of quantities. Continuing with the example of the moving object, if we had a function that describes the velocity of the object with respect to time, the integral of that function would tell us how far the object has traveled over a time interval.
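This, too, can be sketched numerically. Assuming the velocity function `v(t) = 10t` (the derivative of the position example above), a trapezoidal-rule approximation recovers the distance traveled:

```python
def velocity(t):
    """Hypothetical velocity of an object: v(t) = 10t (chosen for illustration)."""
    return 10 * t

def integrate(f, a, b, n=1000):
    """Trapezoidal-rule approximation of the integral of f from a to b."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Distance traveled from t = 0 to t = 3: the exact value is 5 * 3^2 = 45.
distance = integrate(velocity, 0.0, 3.0)
print(round(distance, 3))  # ≈ 45.0
```

The trapezoidal rule is exact for linear functions, so the result matches the analytical answer.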
Machine learning uses derivatives in optimization problems. Optimization algorithms such as gradient descent use derivatives to decide whether to increase or decrease weights in order to maximize or minimize an objective (e.g., a model's accuracy or an error function). Derivatives also let us locally approximate nonlinear functions with linear ones that have constant slopes; with a constant slope, the model can calibrate its weights (increasing or decreasing them) to move toward the target value.
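The idea behind gradient descent can be shown with a minimal one-parameter sketch. The squared-error loss, target value, learning rate, and iteration count below are all arbitrary choices for illustration, not part of any particular library:

```python
def loss(w, target=4.0):
    """Squared-error loss for a single weight (toy objective)."""
    return (w - target) ** 2

def gradient(w, target=4.0):
    """Derivative of the loss with respect to w: d/dw (w - target)^2 = 2(w - target)."""
    return 2 * (w - target)

w = 0.0            # initial weight
learning_rate = 0.1

for _ in range(100):
    # Step in the direction opposite the slope: the derivative tells us
    # whether to increase or decrease the weight to reduce the loss.
    w -= learning_rate * gradient(w)

print(round(w, 4))  # converges toward the target value, 4.0
```

Each step moves the weight against the sign of the derivative, which is exactly the decision rule described above.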