
Conversation

@Hananel-Hazan
Collaborator

No description provided.

n-shevko and others added 30 commits January 23, 2025 14:21
docs: Update README badges - Collaboration Network
---
updated-dependencies:
- dependency-name: jupyterlab
  dependency-version: 4.4.8
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <[email protected]>
Bumps [pillow](https://github.com/python-pillow/Pillow) from 11.2.1 to 11.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](python-pillow/Pillow@11.2.1...11.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-version: 11.3.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
@Hananel-Hazan Hananel-Hazan requested a review from Copilot November 7, 2025 01:12
@Hananel-Hazan Hananel-Hazan merged commit a2840ad into hananel Nov 7, 2025
4 of 5 checks passed

Copilot AI left a comment


Pull Request Overview

This pull request implements sparse tensor support for BindsNET, enabling memory-efficient representation of neural network connections with a low density of non-zero weights. The changes introduce sparse tensor capabilities across connections, features, monitors, and learning rules, along with benchmarking tools and documentation to demonstrate performance characteristics.

Key Changes

  • Added sparse tensor support to network connections, topology features, monitors, and learning rules
  • Introduced w_dtype parameter for weight precision control across connection types
  • Added benchmark scripts to compare sparse vs dense tensor performance and different precision levels
  • Updated DiehlAndCook2015 model to use MulticompartmentConnection with sparse support
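The memory saving that motivates these changes can be sketched with plain PyTorch (the matrix size and fill level below are illustrative, not taken from the benchmark scripts):

```python
import torch

# A mostly-zero weight matrix stored dense vs. sparse COO.
n = 1000
dense = torch.zeros(n, n)
dense[torch.randint(n, (100,)), torch.randint(n, (100,))] = 1.0  # ~0.01% non-zero

sparse = dense.to_sparse()

dense_bytes = dense.element_size() * dense.nelement()
# A sparse COO tensor stores only its values plus their index coordinates.
sparse_bytes = (
    sparse.values().element_size() * sparse.values().nelement()
    + sparse.indices().element_size() * sparse.indices().nelement()
)
assert sparse_bytes < dense_bytes
```

At this fill level the sparse layout needs a few kilobytes against ~4 MB dense; the advantage shrinks (and eventually reverses) as the fraction of non-zero weights grows, which is what the new benchmark quantifies.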

Reviewed Changes

Copilot reviewed 13 out of 88 changed files in this pull request and generated 11 comments.

File Description
test/network/test_connections.py Added test coverage for SparseConnection with wmin/wmax validation
poetry.lock Updated Poetry and package dependencies (JupyterLab, Pillow, pandas, etc.)
examples/mnist/batch_eth_mnist.py Added sparse tensor and precision control support to MNIST example
examples/benchmark/sparse_vs_dense_tensors.py New benchmark comparing memory usage and runtime of sparse vs dense tensors
examples/benchmark/lowering_precision.py New benchmark measuring impact of reduced precision (float16 vs float32)
docs/source/guide/guide_part_i.rst Added comprehensive documentation for sparse tensors and precision control
bindsnet/network/topology_features.py Implemented sparse tensor support in feature classes with dtype control
bindsnet/network/topology.py Added w_dtype parameter to all connection types and simplified SparseConnection
bindsnet/network/monitors.py Added sparse parameter to enable sparse tensor recording in monitors
bindsnet/models/models.py Updated DiehlAndCook2015 to use MulticompartmentConnection with sparse support
bindsnet/learning/learning.py Added sparse tensor handling in learning rule updates
bindsnet/learning/MCC_learning.py Implemented sparse tensor support in MCC learning rules
bindsnet/evaluation/evaluation.py Added sparse tensor conversion in evaluation functions
README.md Updated badges to replace deprecated/broken links
.gitignore Added recursive pyc file pattern


Comment on lines +255 to +258
if self.feature_value.is_sparse:
self.feature_value -= (
torch.bmm(source_s, target_x) * self.connection.dt
).to_sparse()

Copilot AI Nov 7, 2025


When self.feature_value.is_sparse is True, the reduction function is skipped but the batch dimension is not reduced. This may result in incorrect tensor shapes. The update should reduce the batch dimension using self.reduction before converting to sparse: self.feature_value -= self.reduction(torch.bmm(source_s, target_x), dim=0).to_sparse() * self.connection.dt.

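The shape mismatch this comment describes can be checked with a minimal sketch, using illustrative batch/layer sizes and torch.sum over dim 0 as a stand-in for self.reduction:

```python
import torch

# A batch of outer products between source spikes and target traces.
batch, n_source, n_target = 4, 3, 2
source_s = torch.rand(batch, n_source, 1)
target_x = torch.rand(batch, 1, n_target)

update = torch.bmm(source_s, target_x)  # shape [batch, n_source, n_target]
reduced = update.sum(dim=0)             # shape [n_source, n_target]

# Without the reduction, the sparse update keeps the batch dimension
# and no longer matches the [n_source, n_target] weight tensor.
assert update.to_sparse().shape == (batch, n_source, n_target)
assert reduced.to_sparse().shape == (n_source, n_target)
```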
if plot:
image = batch["image"][:, 0].view(28, 28)
inpt = inputs["X"][:, 0].view(time, 784).sum(0).view(28, 28)
lable = batch["label"][0]

Copilot AI Nov 7, 2025


Typo in variable name: 'lable' should be 'label'.

Collaborator Author


@copilot open a new pull request to apply changes based on this feedback

Comment on lines +148 to +149
warnings.warn(
f"Provided value has data type {value.dtype} but parameter w_dtype is {value_dtype}"

Copilot AI Nov 7, 2025


The warning message refers to "parameter w_dtype" but the actual parameter name is "value_dtype". The message should say "parameter value_dtype is" instead of "parameter w_dtype is".

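A hypothetical helper showing the corrected message (check_value_dtype is not BindsNET's API, just a sketch of the pattern the reviewed code uses):

```python
import warnings
import torch

def check_value_dtype(value: torch.Tensor, value_dtype: torch.dtype) -> torch.Tensor:
    # The warning names the actual parameter, value_dtype, not w_dtype.
    if value.dtype != value_dtype:
        warnings.warn(
            f"Provided value has data type {value.dtype} "
            f"but parameter value_dtype is {value_dtype}"
        )
        value = value.to(value_dtype)
    return value

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = check_value_dtype(torch.ones(2, dtype=torch.float32), torch.float16)

assert out.dtype == torch.float16
assert "parameter value_dtype" in str(caught[0].message)
```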
Collaborator Author


@copilot open a new pull request to apply changes based on this feedback

) and not isinstance(self, NoOp):
self.connection.w.clamp_(self.connection.wmin, self.connection.wmax)
if self.connection.w.is_sparse:
raise Exception("SparseConnection isn't supported for wmin\\wmax")

Copilot AI Nov 7, 2025


The exception message contains an incorrect escape sequence. The backslash should be properly escaped or the message should use a forward slash. It should be "wmin/wmax" or "wmin\\wmax" instead of "wmin\wmax".

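A torch-only sketch of why the guard exists: in-place clamping works on dense weights, while the learning rule raises for sparse ones (the message should read "wmin/wmax"):

```python
import torch

# Dense weights support in-place clamping to [wmin, wmax].
w_dense = torch.rand(3, 3) * 2 - 1
w_dense.clamp_(0.0, 1.0)
assert w_dense.min() >= 0.0 and w_dense.max() <= 1.0

# A sparse COO copy sets the flag that triggers the "wmin/wmax" exception.
w_sparse = w_dense.to_sparse()
assert w_sparse.is_sparse
```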
Collaborator Author


@copilot open a new pull request to apply changes based on this feedback


# Compute average firing rates for this label.
selected_spikes = torch.index_select(
spikes, dim=0, index=torch.tensor(indices)

Copilot AI Nov 7, 2025


The torch.tensor(indices) call is unnecessary and may cause performance issues or device mismatches. The indices variable is already a tensor on the same device as spikes. It should be used directly: torch.index_select(spikes, dim=0, index=indices).

Suggested change
spikes, dim=0, index=torch.tensor(indices)
spikes, dim=0, index=indices

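A small example of passing an existing index tensor directly, as the suggested change does (shapes are illustrative):

```python
import torch

spikes = torch.rand(5, 3)
indices = torch.tensor([0, 2, 4])  # already a tensor, as in the reviewed code

# Passing the tensor directly avoids an extra copy and any device mismatch
# that re-wrapping it with torch.tensor(indices) could introduce.
selected = torch.index_select(spikes, dim=0, index=indices)

assert selected.shape == (3, 3)
assert torch.equal(selected, spikes[indices])
```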
Comment on lines +293 to +294
torch.bmm(source_x, target_s) * self.connection.dt
).to_sparse()

Copilot AI Nov 7, 2025


When self.feature_value.is_sparse is True, the reduction function is skipped but the batch dimension is not reduced. This may result in incorrect tensor shapes. The update should reduce the batch dimension using self.reduction before converting to sparse: self.feature_value += self.reduction(torch.bmm(source_x, target_s), dim=0).to_sparse() * self.connection.dt.

Suggested change
torch.bmm(source_x, target_s) * self.connection.dt
).to_sparse()
self.reduction(torch.bmm(source_x, target_s), dim=0).to_sparse() * self.connection.dt
)

):
return

# SparseConnection isn't supported for wmin\\wmax

Copilot AI Nov 7, 2025


The comment contains an incorrect escape sequence. The backslash should be properly escaped or the comment should use a forward slash. It should be "wmin/wmax" or "wmin\\wmax" instead of "wmin\wmax".

"======================= ====================== ====================== ===================="
)
print(
"Sparse (megabytes used) Dense (megabytes used) Ratio (Sparse/Dense) % % of non zero values"

Copilot AI Nov 7, 2025


The table header has "% % of non zero values" with duplicated percent signs. This should likely be just "% of non zero values".

Collaborator Author


@copilot open a new pull request to apply changes based on this feedback

Comment on lines +303 to +305
======================= ====================== ====================== ====================
Sparse (megabytes used) Dense (megabytes used) Ratio (Sparse/Dense) % % of non zero values
======================= ====================== ====================== ====================

Copilot AI Nov 7, 2025


The table header has "% % of non zero values" with duplicated percent signs. This should likely be just "% of non zero values".

Collaborator Author


@copilot open a new pull request to apply changes based on this feedback

import torch
from scipy.spatial.distance import euclidean
from torch.nn.modules.utils import _pair
from torch import device

Copilot AI Nov 7, 2025


Import of 'device' is not used.

Collaborator Author


@copilot open a new pull request to apply changes based on this feedback

@Hananel-Hazan
Collaborator Author

@copilot open a new pull request to apply changes based on the comments in this thread
