Need to investigate whether adding a caching mechanism is feasible and worthwhile: if we receive two identical texts, we just load the results from the cache (provided the confidence is OK).
For example, yesterday we had a review with the text "Good". The domain model processed it with high confidence, so that text got cached; the next time we get the SAME text, we just load the results from the cache.