
Conversation


@Enkidu93 Enkidu93 commented Jan 2, 2026

Added support for capturing rendering patterns, references, and term domains. Moved to using a KeyTerm data structure rather than tuples.

(This also includes ports of recent changes in Machine: sillsdev/machine#362 and sillsdev/machine#368.)
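As a rough sketch of the move away from tuples, a KeyTerm record grouping the captured rendering patterns, references, and term domains might look like the following. The field names and types here are illustrative only, not the actual machine.py API:

```python
from dataclasses import dataclass, field
from typing import List


# Hypothetical sketch of a KeyTerm data structure replacing the previous
# tuples; field names are illustrative, not the real machine.py definition.
@dataclass(frozen=True)
class KeyTerm:
    id: str
    renderings: List[str] = field(default_factory=list)  # target-text surface forms
    references: List[str] = field(default_factory=list)  # scripture references for the term
    domains: List[str] = field(default_factory=list)     # semantic domains from the term list


term = KeyTerm(
    id="Abba",
    renderings=["Abba", "Father"],
    references=["MRK 14:36"],
    domains=["kinship"],
)
```

A dataclass like this makes each captured attribute addressable by name (`term.renderings`) instead of by tuple position, which is the main readability win of the change.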



@Enkidu93 Enkidu93 requested a review from ddaspit January 2, 2026 21:51

Enkidu93 commented Jan 2, 2026

Also, update the machine.py library version.


codecov-commenter commented Jan 7, 2026

Codecov Report

❌ Patch coverage is 76.36364% with 52 lines in your changes missing coverage. Please review.
✅ Project coverage is 90.64%. Comparing base (9868016) to head (c4d0b8b).

Files with missing lines | Patch % | Lines
.../corpora/test_usfm_versification_error_detector.py | 4.76% | 20 Missing ⚠️
...chine/corpora/usfm_versification_error_detector.py | 30.00% | 14 Missing ⚠️
machine/jobs/translation_file_service.py | 33.33% | 10 Missing ⚠️
...tion/huggingface/hugging_face_nmt_model_trainer.py | 90.47% | 4 Missing ⚠️
...hine/corpora/paratext_project_terms_parser_base.py | 96.07% | 2 Missing ⚠️
...hine/corpora/file_paratext_project_terms_parser.py | 85.71% | 1 Missing ⚠️
...a/paratext_project_versification_error_detector.py | 0.00% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #257      +/-   ##
==========================================
- Coverage   90.74%   90.64%   -0.11%     
==========================================
  Files         352      354       +2     
  Lines       22337    22485     +148     
==========================================
+ Hits        20270    20381     +111     
- Misses       2067     2104      +37     



@ddaspit ddaspit left a comment


@ddaspit partially reviewed 11 files and all commit messages, and made 1 comment.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @Enkidu93).


machine/corpora/key_term_row.py line 0 at r1 (raw file):
This file should be named key_term.py.

@Enkidu93 Enkidu93 requested a review from ddaspit January 13, 2026 16:27

@Enkidu93 Enkidu93 left a comment


@Enkidu93 made 3 comments.
Reviewable status: 8 of 21 files reviewed, 1 unresolved discussion (waiting on @ddaspit).


machine/corpora/key_term_row.py line 0 at r1 (raw file):

Previously, ddaspit (Damien Daspit) wrote…

This file should be named key_term.py.

Done.


machine/translation/huggingface/hugging_face_nmt_model_trainer.py line 362 at r3 (raw file):

            ).tokens()
            src_term_partial_word_tokens.remove("▁")
            src_term_partial_word_tokens.remove("\ufffc")

This mirrors code in silnlp more-or-less exactly. I made an issue for creating a shared utility function that could cover some of this. I also experimented with finding a safe way to do this with non-fast tokenizers; it's something we should look into as needed, but I decided it was taking too much time.
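For context, a simplified sketch of the cleanup step shown in the diff above: after tokenizing a partial word with a fast tokenizer, the SentencePiece metaspace marker ("▁") and the object-replacement placeholder ("\ufffc") are dropped so only the real subword pieces remain. The helper name is hypothetical (it is not part of machine.py), and unlike the diff's `list.remove` calls it filters every occurrence rather than just the first:

```python
# SentencePiece word-boundary marker and the sentinel character used
# when tokenizing a word fragment in isolation.
METASPACE = "\u2581"    # "▁"
PLACEHOLDER = "\ufffc"  # object replacement character


def strip_marker_tokens(tokens):
    """Hypothetical helper: drop marker/sentinel tokens, keeping only
    the subword pieces of the partial word."""
    return [t for t in tokens if t not in (METASPACE, PLACEHOLDER)]


tokens = [METASPACE, "tele", "phone", PLACEHOLDER]
print(strip_marker_tokens(tokens))  # → ['tele', 'phone']
```

This is the kind of logic a shared silnlp/machine.py utility could own so the two codebases stop duplicating it.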


tests/translation/huggingface/test_hugging_face_nmt_model_trainer.py line 130 at r3 (raw file):

        corpus = source_corpus.align_rows(target_corpus)

        terms_corpus = DictionaryTextCorpus(MemoryText("terms", [TextRow("terms", 1, ["telephone"])])).align_rows(

I don't love that this test doesn't really verify that the terms affect the result. I added it mainly for code coverage (no exceptions thrown, etc.); I couldn't adapt our one true fine-tuning test because it uses a non-fast tokenizer, and I couldn't find an alternative that works. I did confirm in the debugger that everything was being tokenized properly. Maybe we should consider outputting some kind of artifact in ClearML with the tokenized data so we can compare apples-to-apples against the tokenized experiment txt files in silnlp.

@Enkidu93

Fixes #256 #240
