Range: 0 to 1. Used in the output layer for binary classification problems.
import math
from typing import Union


def calculate_sigmoid(z: Union[int, float]) -> float:
    """
    Calculate the sigmoid of a given input.

    The sigmoid function is defined as 1 / (1 + exp(-z)).

    Args:
        z (Union[int, float]): The input value for which to calculate the sigmoid.

    Returns:
        float: The sigmoid of the input value.

    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    try:
        return 1 / (1 + math.exp(-z))
    except OverflowError:
        # exp(-z) overflows for large negative z; the sigmoid saturates at 0 or 1
        return 0.0 if z < 0 else 1.0


# Example usage
if __name__ == "__main__":
    print(calculate_sigmoid(0))      # Output: 0.5
    print(calculate_sigmoid(2))      # Output: 0.8807970779778823
    print(calculate_sigmoid(-1000))  # Output: 0.0 (handles overflow)
    print(calculate_sigmoid(1000))   # Output: 1.0 (handles overflow)
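A hedged alternative sketch: instead of catching OverflowError, the sigmoid can be made numerically stable by branching on the sign of z so that math.exp is only ever called with a non-positive argument. The name stable_sigmoid is ours, purely for illustration.

import math


def stable_sigmoid(z: float) -> float:
    """Numerically stable sigmoid: math.exp is only called with a non-positive argument."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    # For negative z, rewrite as exp(z) / (1 + exp(z)); exp(z) < 1 here, so it cannot overflow
    exp_z = math.exp(z)
    return exp_z / (1.0 + exp_z)


print(stable_sigmoid(-1000))  # 0.0, with no OverflowError to catch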
Range: -1 to 1. Typically used in hidden layers.
import math
from typing import Union


def calculate_tanh(z: Union[int, float]) -> float:
    """
    Calculate the hyperbolic tangent (tanh) of a given input.

    The tanh function is defined as (exp(z) - exp(-z)) / (exp(z) + exp(-z)).

    Args:
        z (Union[int, float]): The input value for which to calculate the tanh.

    Returns:
        float: The tanh of the input value.

    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    try:
        exp_z = math.exp(z)
        exp_neg_z = math.exp(-z)
        return (exp_z - exp_neg_z) / (exp_z + exp_neg_z)
    except OverflowError:
        # For large positive or negative values, tanh saturates at 1 or -1
        return 1.0 if z > 0 else -1.0


# Example usage
if __name__ == "__main__":
    print(calculate_tanh(1))      # Output: 0.7615941559557649
    print(calculate_tanh(0))      # Output: 0.0
    print(calculate_tanh(-1))     # Output: -0.7615941559557649
    print(calculate_tanh(1000))   # Output: 1.0 (handles overflow)
    print(calculate_tanh(-1000))  # Output: -1.0 (handles overflow)
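Worth noting: Python's standard library already ships math.tanh, which handles large inputs without any special casing, so the hand-rolled version can be sanity-checked against it. The snippet below assumes calculate_tanh from the listing above is in scope.

import math

# Compare the hand-rolled tanh to the standard library across a few inputs
for z in (-1000, -1, 0, 1, 1000):
    assert abs(math.tanh(z) - calculate_tanh(z)) < 1e-12
print("calculate_tanh matches math.tanh on the test inputs")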
Range: 0 to ∞. Widely used due to its simplicity and effectiveness in deep learning.
from typing import Union


def calculate_relu(z: Union[int, float]) -> float:
    """
    Calculate the Rectified Linear Unit (ReLU) of a given input.

    The ReLU function is defined as max(0, z).

    Args:
        z (Union[int, float]): The input value for which to calculate the ReLU.

    Returns:
        float: The ReLU of the input value.

    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    return max(0, z)


# Example usage
if __name__ == "__main__":
    print(calculate_relu(1))    # Output: 1
    print(calculate_relu(-1))   # Output: 0
    print(calculate_relu(0))    # Output: 0
    print(calculate_relu(3.5))  # Output: 3.5
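One property worth seeing explicitly: for any negative input, ReLU outputs zero and its gradient is zero as well, which is what the “dying ReLU” problem mentioned below refers to. The relu_grad helper here is our own illustration, not part of the original listing.

def relu_grad(z: float) -> float:
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise (units stuck at 0 stop learning)."""
    return 1.0 if z > 0 else 0.0


print(relu_grad(3.5))   # 1.0
print(relu_grad(-1.0))  # 0.0, no gradient flows back through this unit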
Range: -∞ to ∞. Helps to avoid the “dying ReLU” problem by allowing a small gradient when the unit is not active.
With alpha = 0.01: LeakyReLU(x) = max(alpha * x, x)
The implementation below uses alpha = 0.1: LeakyReLU(x) = max(0.1 * x, x)
from typing import Union


def calculate_leaky_relu(z: Union[int, float]) -> float:
    """
    Calculate the Leaky Rectified Linear Unit (Leaky ReLU) of a given input.

    The Leaky ReLU function is defined as max(0.1 * z, z).

    Args:
        z (Union[int, float]): The input value for which to calculate the Leaky ReLU.

    Returns:
        float: The Leaky ReLU of the input value.

    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    return max(0.1 * z, z)


# Example usage
if __name__ == "__main__":
    print(calculate_leaky_relu(1))   # Output: 1
    print(calculate_leaky_relu(-1))  # Output: -0.1
    print(calculate_leaky_relu(0))   # Output: 0.0
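Since the negative-side slope alpha is really a hyperparameter, a slightly more general sketch takes it as an argument, defaulting to the commonly cited 0.01; calling it with alpha=0.1 reproduces calculate_leaky_relu above. The leaky_relu name and signature are ours, for illustration only.

from typing import Union


def leaky_relu(z: Union[int, float], alpha: float = 0.01) -> float:
    """Leaky ReLU with a configurable negative-side slope: max(alpha * z, z) for 0 < alpha < 1."""
    return max(alpha * z, z)


print(leaky_relu(-1))             # -0.01 with the default slope
print(leaky_relu(-1, alpha=0.1))  # -0.1, matching calculate_leaky_relu above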
Activation functions are integral to the functioning of neural networks, enabling them to learn from complex data and make accurate predictions. Choosing the right activation function can significantly impact the performance and effectiveness of your neural network model.
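As a closing sketch of how these scalar functions are actually used, the snippet below applies a chosen activation element-wise to a layer's pre-activations. The apply_activation helper and the sample numbers are ours, purely for demonstration, and it assumes the functions defined above are in scope.

from typing import Callable, List


def apply_activation(pre_activations: List[float], activation: Callable[[float], float]) -> List[float]:
    """Apply a scalar activation function element-wise to one layer's pre-activations."""
    return [activation(z) for z in pre_activations]


# Hypothetical pre-activations for a 4-unit hidden layer
z_values = [-2.0, -0.5, 0.0, 3.0]
print(apply_activation(z_values, calculate_relu))     # [0, 0, 0, 3.0]
print(apply_activation(z_values, calculate_sigmoid))  # [0.119..., 0.377..., 0.5, 0.952...]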
For more insights and updates on machine learning and neural networks, stay tuned to our blog!