
Dynamic Threshold #564

Open
KaiSUN1 opened this issue Jul 28, 2024 · 1 comment
Comments


KaiSUN1 commented Jul 28, 2024

I want to learn an adaptive threshold, but after backpropagation the threshold parameter has no gradient. Can anyone help me?

```python
class ALIFNode(neuron.BaseNode):
    def __init__(self, tau: torch.Tensor, v_threshold: float = 1.0, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.v_threshold = torch.nn.Parameter(torch.tensor(v_threshold, dtype=torch.float32, requires_grad=True))
        if not isinstance(tau, torch.Tensor):
            tau = torch.tensor(tau, dtype=torch.float32)
        self.register_buffer('tau', tau)
        self.v_reset = torch.tensor(0.0)  # assuming a reset value; you might need to adjust this

    def neuronal_charge(self, x: torch.Tensor):
        self.v = self.v + (x - (self.v - self.v_reset)) / self.tau

    def neuronal_fire(self):
        # print(self.v_threshold)
        return self.surrogate_function(self.v - self.v_threshold)
```
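A common reason a threshold parameter ends up with no gradient is that the spike is produced by a hard, non-differentiable step (e.g. `(v > v_th).float()`) instead of a surrogate function. Below is a minimal, library-free sketch contrasting the two; `SigmoidSurrogate` is my own stand-in, not spikingjelly's `surrogate_function`:

```python
import torch

v_th = torch.nn.Parameter(torch.tensor(1.0))
v = torch.tensor(0.8)

# Hard Heaviside step: the comparison cuts the autograd graph,
# so v_th can never receive a gradient through it.
spike_hard = (v - v_th > 0).float()
print(spike_hard.requires_grad)  # False: no grad path at all

# Surrogate gradient: forward is still a step, but backward uses a
# smooth approximation (here: a scaled sigmoid derivative), so v_th
# gets a gradient.
class SigmoidSurrogate(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha=4.0):
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.alpha * x)
        # Derivative of sigmoid(alpha * x); None for the alpha argument.
        return grad_output * ctx.alpha * s * (1 - s), None

spike = SigmoidSurrogate.apply(v - v_th)
spike.backward()
print(v_th.grad)  # nonzero: the surrogate restores the gradient path
```

If `self.surrogate_function` is set correctly (e.g. via the `surrogate_function` argument of `BaseNode.__init__`), the gradient should flow to `v_threshold` the same way.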
@frostylight (Contributor)

It seems to work for me; you should provide complete code that reproduces the bug.

```python
class ALIFNode(neuron.BaseNode):
    def __init__(self, tau: float | torch.Tensor = 2., v_threshold: float = 1., *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)

        self.v_threshold = nn.Parameter(torch.tensor(v_threshold, dtype=torch.float, requires_grad=True))
        tau = torch.tensor(tau, dtype=torch.float)
        self.register_buffer("tau", tau)

    def neuronal_charge(self, x: torch.Tensor):
        self.v = self.v + (x - (self.v - self.v_reset)) / self.tau

    def neuronal_fire(self):
        return self.surrogate_function(self.v - self.v_threshold)


torch.manual_seed(0)
an = ALIFNode()
print(an.v_threshold.grad)
result = an(torch.tensor(1))
result.backward()
print(an.v_threshold.grad)
```

the output is

```
None
tensor(-0.4200)
```
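When the gradient does come back `None` in a larger model, it helps to check which parameters actually received gradients after `backward()`. A small helper for this (my own sketch, plain PyTorch, no spikingjelly dependency; the `Linear` module below is just a stand-in):

```python
import torch

def report_grads(module: torch.nn.Module) -> None:
    # Print, for every named parameter, whether backward() populated its .grad.
    for name, p in module.named_parameters():
        status = "no grad" if p.grad is None else f"grad norm {p.grad.norm().item():.4f}"
        print(f"{name}: {status}")

# Usage on a stand-in module:
m = torch.nn.Linear(3, 1)
m(torch.ones(3)).backward()
report_grads(m)  # both weight and bias should report a grad norm
```

A parameter that consistently reports "no grad" is disconnected from the graph, e.g. by a `.detach()`, an in-place reset under `torch.no_grad()`, or a hard step function.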
