merge with master #722
Conversation
docs: Update README badges - Collaboration Network
---
updated-dependencies:
- dependency-name: jupyterlab
  dependency-version: 4.4.8
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <[email protected]>
Bump jupyterlab from 4.4.3 to 4.4.8
Bumps [pillow](https://github.com/python-pillow/Pillow) from 11.2.1 to 11.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](python-pillow/Pillow@11.2.1...11.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-version: 11.3.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
Bump pillow from 11.2.1 to 11.3.0
…ling across multiple files
…ling and warnings
…snet into lowering_precision
…sion and batch_eth_mnist scripts
SparseConnection support
…structor calls across multiple classes.
Lowering precision
Pull Request Overview
This pull request implements sparse tensor support for BindsNET, enabling memory-efficient representation of neural network connections with a low density of nonzero weights. The changes introduce sparse tensor capabilities across connections, features, monitors, and learning rules, along with benchmarking tools and documentation to demonstrate performance characteristics.
Key Changes
- Added sparse tensor support to network connections, topology features, monitors, and learning rules
- Introduced w_dtype parameter for weight precision control across connection types
- Added benchmark scripts to compare sparse vs dense tensor performance and different precision levels
- Updated DiehlAndCook2015 model to use MulticompartmentConnection with sparse support
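To make the two headline features concrete, here is a minimal, self-contained sketch (plain PyTorch, not BindsNET's actual API) of what sparse weight storage and reduced-precision weights look like at the tensor level; the tensor names are illustrative only.

```python
import torch

# A connection weight matrix with only a few nonzero entries.
w = torch.zeros(1000, 1000)
w[0, 0] = 0.5
w[1, 2] = -0.25

# Sparse COO layout stores only the nonzero coordinates and values.
w_sparse = w.to_sparse()
assert w_sparse.is_sparse
assert w_sparse._nnz() == 2

# Converting back to dense recovers the original matrix exactly.
assert torch.equal(w_sparse.to_dense(), w)

# Lower-precision weights, analogous in spirit to the new w_dtype parameter.
w16 = w.to(torch.float16)
assert w16.dtype == torch.float16
```

Sparse storage pays off only when most entries are zero, since each nonzero carries index overhead in addition to its value.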
Reviewed Changes
Copilot reviewed 13 out of 88 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| test/network/test_connections.py | Added test coverage for SparseConnection with wmin/wmax validation |
| poetry.lock | Updated Poetry and package dependencies (JupyterLab, Pillow, pandas, etc.) |
| examples/mnist/batch_eth_mnist.py | Added sparse tensor and precision control support to MNIST example |
| examples/benchmark/sparse_vs_dense_tensors.py | New benchmark comparing memory usage and runtime of sparse vs dense tensors |
| examples/benchmark/lowering_precision.py | New benchmark measuring impact of reduced precision (float16 vs float32) |
| docs/source/guide/guide_part_i.rst | Added comprehensive documentation for sparse tensors and precision control |
| bindsnet/network/topology_features.py | Implemented sparse tensor support in feature classes with dtype control |
| bindsnet/network/topology.py | Added w_dtype parameter to all connection types and simplified SparseConnection |
| bindsnet/network/monitors.py | Added sparse parameter to enable sparse tensor recording in monitors |
| bindsnet/models/models.py | Updated DiehlAndCook2015 to use MulticompartmentConnection with sparse support |
| bindsnet/learning/learning.py | Added sparse tensor handling in learning rule updates |
| bindsnet/learning/MCC_learning.py | Implemented sparse tensor support in MCC learning rules |
| bindsnet/evaluation/evaluation.py | Added sparse tensor conversion in evaluation functions |
| README.md | Updated badges to replace deprecated/broken links |
| .gitignore | Added recursive pyc file pattern |
```python
if self.feature_value.is_sparse:
    self.feature_value -= (
        torch.bmm(source_s, target_x) * self.connection.dt
    ).to_sparse()
```
Copilot AI · Nov 7, 2025
When self.feature_value.is_sparse is True, the reduction function is skipped but the batch dimension is not reduced. This may result in incorrect tensor shapes. The update should reduce the batch dimension using self.reduction before converting to sparse: self.feature_value -= self.reduction(torch.bmm(source_s, target_x), dim=0).to_sparse() * self.connection.dt.
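The shape problem the reviewer describes can be reproduced in isolation. The sketch below (illustrative names; `torch.sum` stands in for `self.reduction`, and the `dt` scaling is dropped to a scalar) shows that `torch.bmm` keeps the batch dimension, so the result must be reduced over dim 0 before it matches the 2-D weight matrix:

```python
import torch

batch, n_src, n_tgt = 4, 3, 5
source_s = torch.rand(batch, n_src, 1)   # per-sample presynaptic spikes
target_x = torch.rand(batch, 1, n_tgt)   # per-sample postsynaptic traces

# bmm yields one outer product per batch element: shape (4, 3, 5).
unreduced = torch.bmm(source_s, target_x)
assert unreduced.shape == (4, 3, 5)

# Reducing over the batch dimension restores the (n_src, n_tgt) weight shape.
reduced = torch.sum(torch.bmm(source_s, target_x), dim=0)
assert reduced.shape == (3, 5)

# Sparse tensors support scalar multiplication, so dt can scale the result.
update = reduced.to_sparse() * 1.0
assert update.is_sparse and update.shape == (3, 5)
```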
```python
if plot:
    image = batch["image"][:, 0].view(28, 28)
    inpt = inputs["X"][:, 0].view(time, 784).sum(0).view(28, 28)
    lable = batch["label"][0]
```
Copilot AI · Nov 7, 2025
Typo in variable name: 'lable' should be 'label'.
@copilot open a new pull request to apply changes based on this feedback
```python
warnings.warn(
    f"Provided value has data type {value.dtype} but parameter w_dtype is {value_dtype}"
```
Copilot AI · Nov 7, 2025
The warning message refers to "parameter w_dtype" but the actual parameter name is "value_dtype". The message should say "parameter value_dtype is" instead of "parameter w_dtype is".
@copilot open a new pull request to apply changes based on this feedback
```python
) and not isinstance(self, NoOp):
    self.connection.w.clamp_(self.connection.wmin, self.connection.wmax)
if self.connection.w.is_sparse:
    raise Exception("SparseConnection isn't supported for wmin\\wmax")
```
Copilot AI · Nov 7, 2025
The exception message contains an incorrect escape sequence. The backslash should be properly escaped or the message should use a forward slash. It should be "wmin/wmax" or "wmin\\wmax" instead of "wmin\wmax".
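The backslash subtlety here is easy to verify in plain Python: `"\\"` in source code is one literal backslash, a raw string gives the same result without escaping, and a forward slash sidesteps the issue entirely. A quick demonstration (standalone, not from the PR):

```python
# "\\" in a string literal is a single backslash character.
s_escaped = "wmin\\wmax"   # renders as: wmin\wmax
s_raw = r"wmin\wmax"       # raw string, identical result
assert s_escaped == s_raw
assert len("\\") == 1

# A forward slash needs no escaping at all.
s_slash = "wmin/wmax"
assert "\\" not in s_slash
```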
@copilot open a new pull request to apply changes based on this feedback
```python
# Compute average firing rates for this label.
selected_spikes = torch.index_select(
    spikes, dim=0, index=torch.tensor(indices)
```
Copilot AI · Nov 7, 2025
The torch.tensor(indices) call is unnecessary and may cause performance issues or device mismatches. The indices variable is already a tensor on the same device as spikes. It should be used directly: torch.index_select(spikes, dim=0, index=indices).
Suggested change:
```diff
-    spikes, dim=0, index=torch.tensor(indices)
+    spikes, dim=0, index=indices
```
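The suggested fix is easy to sanity-check in isolation: when the index is already a `LongTensor`, passing it straight to `torch.index_select` avoids the copy that `torch.tensor(...)` would make (and any device mismatch). A minimal sketch with made-up shapes:

```python
import torch

spikes = torch.rand(10, 6)           # e.g. 10 neurons, 6 samples
indices = torch.tensor([1, 3, 5])    # already a LongTensor

# Pass the index tensor directly; no torch.tensor() wrapper needed.
selected = torch.index_select(spikes, dim=0, index=indices)
assert selected.shape == (3, 6)

# Equivalent to advanced indexing along dim 0.
assert torch.equal(selected, spikes[indices])
```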
```python
    torch.bmm(source_x, target_s) * self.connection.dt
).to_sparse()
```
Copilot AI · Nov 7, 2025
When self.feature_value.is_sparse is True, the reduction function is skipped but the batch dimension is not reduced. This may result in incorrect tensor shapes. The update should reduce the batch dimension using self.reduction before converting to sparse: self.feature_value += self.reduction(torch.bmm(source_x, target_s), dim=0).to_sparse() * self.connection.dt.
Suggested change:
```diff
-    torch.bmm(source_x, target_s) * self.connection.dt
-).to_sparse()
+    self.reduction(torch.bmm(source_x, target_s), dim=0).to_sparse() * self.connection.dt
+)
```
```python
):
    return

# SparseConnection isn't supported for wmin\\wmax
```
Copilot AI · Nov 7, 2025
The comment contains an incorrect escape sequence. The backslash should be properly escaped or the comment should use a forward slash. It should be "wmin/wmax" or "wmin\\wmax" instead of "wmin\wmax".
```python
    "======================= ====================== ====================== ===================="
)
print(
    "Sparse (megabytes used) Dense (megabytes used) Ratio (Sparse/Dense) % % of non zero values"
```
Copilot AI · Nov 7, 2025
The table header has "% % of non zero values" with duplicated percent signs. This should likely be just "% of non zero values".
@copilot open a new pull request to apply changes based on this feedback
```
======================= ====================== ====================== ====================
Sparse (megabytes used) Dense (megabytes used) Ratio (Sparse/Dense) % % of non zero values
======================= ====================== ====================== ====================
```
Copilot AI · Nov 7, 2025
The table header has "% % of non zero values" with duplicated percent signs. This should likely be just "% of non zero values".
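For context on what this benchmark table compares, the rough memory accounting can be sketched directly (illustrative figures, assuming COO layout: a dense float32 matrix costs 4 bytes per element, while each nonzero in a 2-D sparse tensor costs two int64 index coordinates plus one float32 value):

```python
import torch

n = 1000
dense = torch.zeros(n, n)
# Scatter up to 100 nonzero entries at random positions.
dense[torch.randint(n, (100,)), torch.randint(n, (100,))] = 1.0
sparse = dense.to_sparse()

# Dense cost: 4 bytes per element regardless of how many are zero.
dense_bytes = dense.element_size() * dense.nelement()

# Sparse cost per nonzero: 2 int64 indices (8 bytes each) + 1 float32 value.
sparse_bytes = sparse._nnz() * (8 * 2 + 4)

assert dense_bytes == 4_000_000
assert sparse_bytes < dense_bytes   # sparse wins only at low density
```

This is why the table reports the sparse/dense ratio alongside the percentage of nonzero values: past a crossover density, the per-entry index overhead makes sparse storage larger than dense.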
@copilot open a new pull request to apply changes based on this feedback
```python
import torch
from scipy.spatial.distance import euclidean
from torch.nn.modules.utils import _pair
from torch import device
```
Copilot AI · Nov 7, 2025
Import of 'device' is not used.
@copilot open a new pull request to apply changes based on this feedback
@copilot open a new pull request to apply changes based on the comments in this thread
No description provided.