
perf: use aligned slice access in SparkUnsafeArray bulk append #3659

Draft
andygrove wants to merge 3 commits into apache:main from andygrove:perf/spark-unsafe-array-aligned-nullable

Conversation

@andygrove
Member

Summary

  • Replace per-element read_unaligned() + manual pointer arithmetic with slice-based indexed access in all nullable bulk append paths (impl_append_to_builder macro, append_booleans, append_timestamps, append_dates)
  • Remove runtime alignment checks in non-nullable paths since alignment is guaranteed by SparkUnsafeArray layout, always use append_slice bulk copy
  • Add debug_assert to verify the alignment invariant

Rationale

SparkUnsafeArray guarantees natural alignment for element data: the header is 8 + ceil(n/64)*8 bytes (always 8-byte aligned), and elements are laid out at an element_size stride from that aligned base. The nullable paths previously did a per-element ptr.read_unaligned() followed by ptr = ptr.add(1), while the non-nullable paths had runtime alignment checks with a fallback to unaligned reads. Since alignment is guaranteed, all paths can use slice access, which gives better codegen: there is no manual pointer arithmetic, and the compiler can reason about bounds.
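The header arithmetic above can be sketched as follows (a standalone illustration, not Comet code; `header_size` is a hypothetical name): 8 bytes for the element count plus one 64-bit word of null bitset per 64 elements, so the total is always a multiple of 8.

```rust
/// Header size per the layout described above: 8 bytes for the element
/// count plus one 64-bit null-bitset word per 64 elements.
fn header_size(num_elements: usize) -> usize {
    8 + ((num_elements + 63) / 64) * 8
}

fn main() {
    assert_eq!(header_size(0), 8);   // no elements: count word only
    assert_eq!(header_size(1), 16);  // one bitset word
    assert_eq!(header_size(64), 16); // still one bitset word
    assert_eq!(header_size(65), 24); // second bitset word needed
    // Every header size is 8-byte aligned by construction.
    assert!(header_size(100) % 8 == 0);
}
```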

Test plan

  • cargo clippy --all-targets --workspace -- -D warnings passes
  • Existing row-to-columnar and array element append tests/benchmarks cover these code paths

SparkUnsafeArray guarantees natural alignment for element data: the
header is 8 + ceil(n/64)*8 bytes (always 8-byte aligned), and elements
are at element_size stride from the aligned base. This means we can
create a slice upfront and use indexed access instead of per-element
read_unaligned() with manual pointer arithmetic.

Changes:
- Nullable paths in impl_append_to_builder macro, append_booleans,
  append_timestamps, and append_dates now create a slice upfront and
  iterate with enumerate, eliminating read_unaligned and ptr.add(1)
- Non-nullable paths simplified: removed runtime alignment checks since
  alignment is guaranteed, always use append_slice bulk copy
- Added debug_assert for alignment invariant
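The nullable-path change described above can be sketched like this (hypothetical names and a closure standing in for the builder's null bitset; not the actual Comet API): build the element slice once, then iterate with `enumerate` instead of advancing a raw pointer per element.

```rust
/// Sketch of the nullable bulk-append pattern: the slice is created
/// upfront, and indexed/enumerated access replaces per-element
/// read_unaligned() plus ptr.add(1).
fn append_nullable_i64(values: &[i64], is_null: impl Fn(usize) -> bool) -> Vec<Option<i64>> {
    values
        .iter()
        .enumerate()
        .map(|(i, &v)| if is_null(i) { None } else { Some(v) })
        .collect()
}

fn main() {
    // Element 1 is marked null in this toy bitset closure.
    let out = append_nullable_i64(&[10, 20, 30], |i| i == 1);
    assert_eq!(out, vec![Some(10), None, Some(30)]);
}
```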
@andygrove andygrove marked this pull request as draft March 11, 2026 00:14
SparkUnsafeArray elements may not be naturally aligned (e.g., i64 at
4-byte offset). Restore runtime alignment check: use slice-based access
when aligned, fall back to per-element read_unaligned when not. The
nullable path now has the same aligned fast path as non-nullable, which
was the original goal of this PR.
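The restored check might look roughly like this (a minimal sketch with a hypothetical `copy_i64s` helper, not the Comet implementation): take the slice-based fast path only when the base pointer is naturally aligned, and fall back to per-element `read_unaligned` otherwise.

```rust
use std::slice;

/// Copy `n` i64 values starting at `ptr`: slice-based access when the
/// pointer is 8-byte aligned, per-element read_unaligned otherwise.
///
/// Safety: `ptr` must point to at least `n * 8` readable bytes.
unsafe fn copy_i64s(ptr: *const u8, n: usize) -> Vec<i64> {
    if ptr as usize % std::mem::align_of::<i64>() == 0 {
        // Aligned fast path: build the slice upfront; the compiler can
        // bounds-check and vectorize the bulk copy.
        slice::from_raw_parts(ptr as *const i64, n).to_vec()
    } else {
        // Unaligned fallback (e.g. i64 data at a 4-byte offset).
        (0..n)
            .map(|i| (ptr.add(i * std::mem::size_of::<i64>()) as *const i64).read_unaligned())
            .collect()
    }
}

fn main() {
    let values: Vec<i64> = vec![1, 2, 3, 4];
    // A Vec<i64> allocation is 8-byte aligned, so this takes the fast path.
    let copied = unsafe { copy_i64s(values.as_ptr() as *const u8, values.len()) };
    assert_eq!(copied, values);
}
```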
