Activation functions
Ahoy there, matey! It's been a while since I penned something down (immediately channels my inner Lady Whistledown). Today we are setting sail into the world of activation functions. What are they? What types are there? We are about to find out, so buckle up!
Whoa whoa whoa, wait, don't leave just yet! They are not that hard, I promise. Just keep reading and you will see.
What is a neural network and what is a neuron?
Before we get into activation functions, you need to understand what a neural network and a neuron are. Think of a neural network like a brain, made up of tiny units called neurons. These neurons process information and pass it along to the next layer of neurons, helping the network learn and make decisions.
What about activation functions?
So where do activation functions come in, you ask? Activation functions decide whether a neuron should fire or not. These are usually mathematical functions: each neuron receives an input, processes it, and sends the output to the next layer. If the output is above a particular threshold, the neuron fires; otherwise, it does not. This is what lets a network model complex, non-linear patterns. Let's say you are trying to decide whether you want orange cake or not. You might need an assessment tool to help you decide based on the knowledge you already have on hand at that point. That is the role Lord Activation Function💂 plays in a neural network. (Okay fine, you caught me, no more Bridgerton references in class.)
Here are some of the most common activation functions you will encounter while working with neural networks.
Step function
This function takes an input and outputs a 1 if the value is greater than or equal to 0; if the value is less than 0, it outputs a 0. Just the same way your bulb either turns on or off based on the switch.
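Here's a minimal sketch of that in Python (my own code, using NumPy so it works on whole arrays at once; the name step is just illustrative):

```python
import numpy as np

def step(x):
    # 1 when x >= 0, 0 otherwise -- the on/off light switch
    return np.where(x >= 0, 1, 0)

print(step(np.array([-2.0, 0.0, 3.5])))  # -> [0 1 1]
```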
Sigmoid function
A sigmoid function is more flexible than the step function in the sense that it can produce any output between 0 and 1, not just the two extremes. It uses the mathematical function shown below:
σ(x) = 1 / (1 + e^(−x))
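In code, that formula could look something like this (again a rough sketch of mine in NumPy, not a production implementation):

```python
import numpy as np

def sigmoid(x):
    # squashes any real number into the open interval (0, 1)
    return 1 / (1 + np.exp(-x))

print(sigmoid(np.array([-2.0, 0.0, 3.5])))  # -> roughly [0.119 0.5 0.971]
```

Notice how the bigger the input, the closer the output creeps to 1, and the more negative the input, the closer it gets to 0. That smooth transition is what makes it more flexible than the hard on/off of the step function.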
ReLU (Rectified Linear Unit)
Then we have this bad boy over here, the Rectified Linear Unit, the Beyoncé of activation functions. Compared to the other two it is simpler and, in practice, works very well in deep networks: if the neuron's output value is less than 0, the output is 0, but if it is greater than 0, then that value itself is the output.
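A sketch of ReLU with the same caveats as before (illustrative NumPy, my own naming):

```python
import numpy as np

def relu(x):
    # negatives become 0; positives pass through unchanged
    return np.maximum(0, x)

print(relu(np.array([-2.0, 0.0, 3.5])))  # -> [0.  0.  3.5]
```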
Code sketches for each of the above activation functions are included right here in the post; here is one last side-by-side comparison to tie them together.
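This is a minimal, self-contained demo (my own sketch, assuming NumPy is installed; the names step, sigmoid, and relu are just illustrative):

```python
import numpy as np

# compact, self-contained versions of the three functions from this post
step = lambda x: np.where(x >= 0, 1, 0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))
relu = lambda x: np.maximum(0, x)

inputs = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
for name, fn in [("step", step), ("sigmoid", sigmoid), ("ReLU", relu)]:
    print(f"{name:>7}: {np.round(fn(inputs), 3)}")
```

Running it side by side like this makes the personalities obvious: step slams between 0 and 1, sigmoid glides between them, and ReLU just refuses to deal with negativity.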
See? It was cool you stuck around. Now you can tell your family and friends all about activation functions. Thank you for being here, and see you in the next one. In case I missed any important details, you are free to leave a comment.