Reduce bundle size below Obsidian Sync limit #1073
📝 Walkthrough

The changes modernize the tokenization approach in AIAssistant using Tiktoken with local caching, remove a deprecated OpenAI model, add clarifying UI notes about token count accuracy, introduce extensive responsive CSS styling rules, and update the TypeScript module resolution strategy.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- main.js
- src/ai/AIAssistant.ts
- src/ai/Provider.ts
- src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts
- src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts
- styles.css
- tsconfig.json
💤 Files with no reviewable changes (1)
- src/ai/Provider.ts
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx,mts,mjs,js,json}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,mts,mjs,js,json}: Use tab indentation with width 2 in TypeScript and configuration files (enforced by Biome).
Follow an 80-character line guide (enforced by Biome).
Files:
- src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts
- src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts
- src/ai/AIAssistant.ts
- tsconfig.json
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: Use camelCase for variables and functions in TypeScript.
Prefer type-only imports in TypeScript.
Files:
- src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts
- src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts
- src/ai/AIAssistant.ts
**/*.{ts,tsx,svelte}
📄 CodeRabbit inference engine (AGENTS.md)
Use PascalCase for classes and Svelte components.
Files:
- src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts
- src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts
- src/ai/AIAssistant.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Route logging through the `logger` utilities for consistent output.
Files:
- src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts
- src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts
- src/ai/AIAssistant.ts
{main.js,styles.css}
📄 CodeRabbit inference engine (AGENTS.md)
Bundled artifacts `main.js` and `styles.css` at the repo root should be generated, not hand-edited.
Files:
styles.css
🧠 Learnings (5)
📚 Learning: 2025-12-21T07:54:34.875Z
Learnt from: CR
Repo: chhoumann/quickadd PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T07:54:34.875Z
Learning: Applies to **/*.{ts,tsx} : Prefer type-only imports in TypeScript.
Applied to files:
tsconfig.json
📚 Learning: 2025-12-21T07:54:34.875Z
Learnt from: CR
Repo: chhoumann/quickadd PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T07:54:34.875Z
Learning: Applies to src/main.ts : Preserve the hand-ordered imports in `src/main.ts`; disable auto-sorting there.
Applied to files:
tsconfig.json
📚 Learning: 2025-12-21T07:54:34.875Z
Learnt from: CR
Repo: chhoumann/quickadd PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T07:54:34.875Z
Learning: Use `bun run build`: run `tsc --noEmit` then produce the production bundle.
Applied to files:
tsconfig.json
📚 Learning: 2025-12-21T07:54:34.875Z
Learnt from: CR
Repo: chhoumann/quickadd PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T07:54:34.875Z
Learning: Use `bun run dev`: watch-mode bundle via `esbuild.config.mjs`, regenerating `main.js` as you edit.
Applied to files:
tsconfig.json
📚 Learning: 2025-12-21T07:54:34.875Z
Learnt from: CR
Repo: chhoumann/quickadd PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T07:54:34.875Z
Learning: Applies to {main.js,styles.css} : Bundled artifacts `main.js` and `styles.css` at the repo root should be generated, not hand-edited.
Applied to files:
styles.css
🧬 Code graph analysis (1)
src/ai/AIAssistant.ts (1)
src/ai/Provider.ts (1)
`Model` (14-17)
🔇 Additional comments (7)
src/gui/MacroGUIs/AIAssistantCommandSettingsModal.ts (1)
182-189: LGTM - Clear user-facing note about token count accuracy.

The note accurately informs users that token counts are exact for OpenAI models and estimated for others, aligning with the `cl100k_base` fallback logic in `AIAssistant.ts`. The styling is consistent with existing UI patterns.

src/gui/MacroGUIs/AIAssistantInfiniteCommandSettingsModal.ts (1)

146-153: LGTM - Consistent with AIAssistantCommandSettingsModal.ts.

The UI note mirrors the implementation in the sibling modal, maintaining consistency across the plugin's AI assistant interfaces.
src/ai/AIAssistant.ts (4)
1-4: Good choice using js-tiktoken/lite for bundle size reduction.

The selective imports of only the `cl100k_base` and `o200k_base` ranks minimize bundle size while supporting modern OpenAI models. `o200k_base` covers GPT-4o and newer models; `cl100k_base` covers GPT-4 and GPT-3.5-turbo, and serves as a reasonable fallback.
21-35: Solid caching pattern for Tiktoken instances.

The module-level cache prevents repeated instantiation of `Tiktoken` encoders. Since only two encodings are bundled, the memory footprint is minimal. The defensive fallback on line 28 provides an extra safety net.
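The caching pattern described can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the `Encoder` type and `makeEncoder` factory are stand-ins for constructing `Tiktoken` instances from the bundled ranks.

```typescript
// Minimal sketch of a module-level encoder cache. Encoder and makeEncoder
// are stand-ins; the real code builds Tiktoken instances from the bundled
// cl100k_base / o200k_base rank data.
type Encoder = { encode(text: string): number[] };

const bundledEncodings = ["cl100k_base", "o200k_base"] as const;
type EncodingName = (typeof bundledEncodings)[number];

const encoderCache = new Map<EncodingName, Encoder>();
let instantiations = 0; // only here to demonstrate the cache works

function makeEncoder(_name: EncodingName): Encoder {
	instantiations++;
	// Fake tokenizer: one "token" per whitespace-separated word.
	return {
		encode: (text) => text.split(/\s+/).filter(Boolean).map((_, i) => i),
	};
}

function getEncoder(name: EncodingName): Encoder {
	let encoder = encoderCache.get(name);
	if (!encoder) {
		encoder = makeEncoder(name);
		encoderCache.set(name, encoder);
	}
	return encoder;
}
```

Calling `getEncoder("cl100k_base")` repeatedly returns the same instance, so the relatively expensive encoder construction happens at most once per encoding.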
37-53: Token counting logic correctly handles model encoding fallbacks.

The normalization of legacy encodings (`p50k_*`, `r50k_base`, `gpt2`) to `cl100k_base` is appropriate since those ranks aren't bundled. Combined with the UI note about estimates for non-OpenAI models, users are informed when counts may be approximate.

One minor observation: the `try`-`catch` on lines 40-44 already sets `encodingName = "cl100k_base"` on error, and line 39 initializes it to `"cl100k_base"`. The initialization could be removed since the catch block handles the fallback, but the current approach is more explicit.
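The fallback behavior might look roughly like this, a sketch under the assumption that only two rank sets are bundled; the function name is illustrative, not taken from the source.

```typescript
// Sketch: normalize a model's preferred encoding to one of the two bundled
// rank sets, falling back to cl100k_base for legacy encodings (p50k_*,
// r50k_base, gpt2) whose ranks are not shipped in the bundle.
type BundledEncoding = "cl100k_base" | "o200k_base";

function normalizeEncoding(encodingName: string): BundledEncoding {
	if (encodingName === "o200k_base") return "o200k_base";
	// cl100k_base itself, legacy encodings, and anything unknown all resolve
	// to cl100k_base; counts for non-OpenAI models are therefore estimates.
	return "cl100k_base";
}
```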
55-69: Good defensive validation for `repeatUntilResolved`.

The input validation helps catch programming errors early with clear error messages.
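A sketch of that defensive shape; the signature and specific checks are assumptions for illustration, not the plugin's exact code.

```typescript
// Hypothetical sketch: run a callback on an interval until a promise
// settles, validating inputs up front with clear error messages.
function repeatUntilResolved<T>(
	callback: () => void,
	promise: Promise<T>,
	interval: number,
): Promise<T> {
	if (typeof callback !== "function") {
		throw new TypeError("repeatUntilResolved: callback must be a function");
	}
	if (!Number.isFinite(interval) || interval <= 0) {
		throw new RangeError(
			"repeatUntilResolved: interval must be a positive, finite number",
		);
	}
	const timer = setInterval(callback, interval);
	// Stop the interval whether the promise resolves or rejects.
	return promise.finally(() => clearInterval(timer));
}
```

Keeping the function non-`async` means the validation errors throw synchronously at the call site instead of surfacing as rejected promises, which makes programming mistakes easier to spot.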
styles.css (1)
1-1: Generated artifact - no manual review required.

Per coding guidelines, `styles.css` is a bundled artifact that should be generated via the build process, not hand-edited. The minified output includes appropriate responsive styles, accessibility media queries (`prefers-reduced-motion`, `prefers-contrast`), and theme support.
Summary
This PR reduces QuickAdd’s bundled `main.js` size to fit within Obsidian Sync Standard’s per-file limit; exceeding that limit prevents the plugin from syncing and enabling on mobile.

Motivation / Context
QuickAdd’s production bundle exceeded the Sync Standard cap, causing `main.js` to be skipped during sync and the plugin to fail loading on mobile. The largest contributor was `js-tiktoken`. We need to keep a single-file bundle (Obsidian plugins require this), so the solution focuses on slimming the token-counting dependency.

What changed
- Switched token counting to `js-tiktoken/lite` and included only the required OpenAI ranks (`cl100k_base`, `o200k_base`).
- Removed `text-davinci-003` from default OpenAI models (deprecated; requires additional ranks).
- Updated `tsconfig.json` to `moduleResolution: "bundler"` so TypeScript can resolve `js-tiktoken` subpath exports.

Result
The `main.js` size drops from ~6.11 MB to ~4.23 MB (under the Sync Standard limit).

Testing
- `bun run build`
- `bun run test`

Screenshots
N/A (text-only UI note)
Closes #1071