Added leaky rectified linear algorithm #6260

atomicsorcerer wants to merge 3 commits into TheAlgorithms:master from
Conversation
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
| Script inspired from its corresponding Wikipedia article
| https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
| """
| from __future__ import annotations
Is the __future__ import needed here?
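(For context, assuming the project still supports Python versions below 3.10: the import postpones annotation evaluation, which is what makes the PEP 604 `float | list[float]` union syntax in the hints legal on 3.9 and earlier. A minimal illustration:)

```python
from __future__ import annotations  # annotations are stored as strings, evaluated lazily


# Without the import, this def raises TypeError at import time on Python <= 3.9,
# because "float | list[float]" is evaluated eagerly and float.__or__ does not exist there.
def leaky_relu(vector: float | list[float]) -> float | list[float]:
    ...
```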
| def leaky_relu(
|     vector: float | list[float], negative_slope: float = 0.01
| ) -> float | list[float]:
Suggested change:
| - def leaky_relu(
| -     vector: float | list[float], negative_slope: float = 0.01
| - ) -> float | list[float]:
| + def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
I think just type hinting it as np.ndarray is fine for this function. Using numpy arrays is pretty standard in NN-related Python programming, and numpy functions that take arrays generally accept scalars too ("array_like").
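A quick illustration of that array_like behavior (the calls below are illustrative, not from the PR):

```python
import numpy as np

# numpy ufuncs accept "array_like" inputs, so the same call works for
# scalars, Python lists, and ndarrays alike:
print(np.maximum(-2.0, -0.02))        # scalar in -> numpy scalar out (-0.02)
print(np.maximum([1.0, -1.0], 0.0))   # list in   -> ndarray out ([1., 0.])
```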
|     if isinstance(vector, int):
|         raise ValueError(
|             "leaky_relu() only accepts floats or a list of floats for vector"
|         )
Why do we not want to support ints as input? They can all be cast to floats for the output.
|     if not isinstance(negative_slope, float):
|         raise ValueError("leaky_relu() only accepts a float value for negative_slope")
I think the constraints on the possible range of the negative slope should be clearer. Are we restricting it to a float between 0 and 1? If so, that should be the if-condition instead.
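A minimal sketch of what that stricter check could look like (the 0-to-1 range here is the reviewer's assumption, not something the PR settles):

```python
# Hypothetical stricter validation; the open interval (0, 1) is an assumption.
if not isinstance(negative_slope, float) or not 0.0 < negative_slope < 1.0:
    raise ValueError("negative_slope must be a float strictly between 0 and 1")
```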
|     if isinstance(vector, float):
|         if vector < 0:
|             return vector * negative_slope
|         return vector
|
|     for index, value in enumerate(vector):
|         if value < 0:
|             vector[index] = value * negative_slope
|
|     return vector
Suggested change:
| -     if isinstance(vector, float):
| -         if vector < 0:
| -             return vector * negative_slope
| -         return vector
| -     for index, value in enumerate(vector):
| -         if value < 0:
| -             vector[index] = value * negative_slope
| -     return vector
| +     return np.maximum(vector, negative_slope * vector)
numpy functions can handle these cases very easily. Also, for 0 < negative_slope < 1, leaky ReLU is equivalent to max(x, negative_slope * x).
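Putting the review suggestions together, a minimal sketch of the numpy version might look like this (the docstring and example values are illustrative, not from the PR):

```python
import numpy as np


def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
    """Apply leaky ReLU elementwise: x if x >= 0, else negative_slope * x."""
    # For 0 < negative_slope < 1 this equals max(x, negative_slope * x),
    # since x >= slope * x when x >= 0 and x < slope * x when x < 0.
    return np.maximum(vector, negative_slope * vector)


print(leaky_relu(np.array([2.0, -3.0, 0.5])))  # [ 2.   -0.03  0.5 ]
print(leaky_relu(-4.0))                        # -0.04 (scalars work via array_like)
```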
Closing in favor of #8962
Describe your change:
Added the leaky rectified linear algorithm (also known as leaky ReLU). Leaky ReLU is an alternative to standard ReLU that addresses the dying ReLU problem, in which neurons get stuck outputting zero and stop learning, an issue in some neural networks.
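For reference, the function being added computes, with a = negative_slope:

f(x) = x      if x >= 0
f(x) = a * x  if x < 0

Because the negative branch keeps a small nonzero slope (0.01 by default here), gradients do not vanish entirely for negative inputs, which is what avoids the dying ReLU problem.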
Checklist:
Fixes: #{$ISSUE_NO}.