Modern language processing systems handle billions of translation requests daily, processing text across hundreds of language pairs. Yet human-level accuracy remains elusive: translation quality metrics show that even state-of-the-art systems reach only 30-40% accuracy on nuanced technical content and struggle with context-dependent meaning.

The fundamental challenge lies in capturing the complex interplay between syntax, semantics, and domain-specific terminology while maintaining coherent document structure across languages.

This page brings together solutions from recent research, including hybrid neural architectures that combine general and domain-specific translation, preprocessing systems that simplify source text complexity, and methods for disentangling syntax from conceptual meaning in low-resource languages. These and other approaches focus on improving translation quality for technical and specialized content while maintaining processing efficiency at scale.

1. Method for Multi-Language Machine Translation Using Cross-Language Representation Pre-Training and Domain-Specific Transformation

KUNMING UNIVERSITY OF SCIENCE AND TECHNOLOGY, 2024

A method for improving multi-language and multi-domain machine translation, especially for low-resource and zero-resource languages, using pre-training, two-stage training, and back translation techniques. The method involves: 1) Pre-training a cross-language representation model XLM-R on a large multilingual corpus. 2) Adapting the pre-trained XLM-R for specific languages and domains using a new domain-specific transformation (CDSTX) to improve low-resource domain adaptation. 3) Fine-tuning the adapted XLM-R for translation tasks using back translation to further improve performance.
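
Of the three steps, back translation is the easiest to illustrate in isolation. Below is a minimal sketch that assumes the Hugging Face transformers library and a public Marian checkpoint rather than the patent's adapted XLM-R; it builds pseudo-parallel training pairs by translating target-language monolingual text back into the source language. The CDSTX adaptation itself is not reproduced here.

```python
# Back-translation sketch: generate synthetic (source, target) pairs from
# target-language monolingual text. The model choice is an assumption, not
# the patent's adapted XLM-R.
from transformers import MarianMTModel, MarianTokenizer

def back_translate(monolingual_targets, rev_model="Helsinki-NLP/opus-mt-fr-en"):
    """Translate target-language sentences back into the source language."""
    tokenizer = MarianTokenizer.from_pretrained(rev_model)
    model = MarianMTModel.from_pretrained(rev_model)
    batch = tokenizer(monolingual_targets, return_tensors="pt", padding=True)
    generated = model.generate(**batch, max_new_tokens=128)
    pseudo_sources = tokenizer.batch_decode(generated, skip_special_tokens=True)
    # Pair each synthetic source sentence with its authentic target sentence.
    return list(zip(pseudo_sources, monolingual_targets))

pairs = back_translate(["Le modèle est adapté au domaine juridique."])
```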

2. Sentence Translation Method Utilizing Bidirectional Long Short-Term Memory Neural Networks with Dual Sequence Context Integration

AGRICULTURAL BANK OF CHINA, 2024

A sentence translation method using bidirectional long short-term memory (LSTM) neural networks to improve accuracy and reduce difficulty compared to conventional machine translation. The method feeds the source sentence into a bidirectional LSTM, which processes it as two sequences, one in normal order and one reversed, capturing the context around each word in its hidden states. At each step, the hidden states from both directions are combined into a context vector, and the context vectors together with prior translations determine the next output. Associating each phrase with both its left and right context improves accuracy over translating a single-direction sequence.

Patent drawing: CN117709368A
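
A minimal PyTorch sketch of this dual-sequence encoding follows; the vocabulary size, dimensions, and random input are illustrative stand-ins, not values from the patent.

```python
# Bidirectional LSTM over a source sentence: the forward pass reads it in
# normal order, the backward pass reads it reversed, and each word's context
# vector concatenates the hidden states from both directions.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 10_000, 128, 256
embedding = nn.Embedding(vocab_size, embed_dim)
bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

source_ids = torch.randint(0, vocab_size, (1, 12))  # one 12-word sentence
outputs, _ = bilstm(embedding(source_ids))          # (1, 12, 2 * hidden_dim)
# outputs[:, t] is the context vector for word t: forward state (left
# context) concatenated with backward state (right context). The decoder
# would condition on these vectors plus its prior translations.
```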

3. Natural Language Processing System with Synapper Model for Multidimensional Sentence Representation

KIM MINGOO, 2024

Natural language processing using a synapper model unit that enables accurate translation and sentence parsing across multiple languages without requiring separate language-specific training. The method involves converting words into neural concept codes, feeding them into a language processing unit using a synapper model, parsing the codes, and converting back to words. This synapper model provides a unified structural representation of sentences across languages by integrating their different grammatical structures into multiple dimensions.

4. Machine Translation Method with Domain-Specific Noun Identification and Separate Translation Using Forward Maximum Matching

PING AN TECHNOLOGY (SHENZHEN) CO LTD, 2023

Accurate machine translation method that improves translation of technical terms and proper nouns. It involves determining the domain-specific nouns in the source language text using forward maximum matching, translating those nouns separately, and replacing them in the translation. This improves accuracy for technical terms compared to using the standard translation model alone. The translation model is trained using filtered samples with optimal word ratios.

Patent drawing: WO2023240839A1
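
Forward maximum matching itself is simple enough to show in full. In the plain-Python sketch below, the matcher greedily takes the longest dictionary entry at each position, so multiword technical terms are found before their component words; the term dictionary and the tagging of matches for separate translation are illustrative assumptions.

```python
# Forward maximum matching against a domain-term dictionary: at each
# position, take the longest known term starting there, else emit one token.
def forward_max_match(tokens, term_dict, max_len=5):
    """Return (span, is_term) chunks; term_dict is a set of domain terms."""
    chunks, i = [], 0
    while i < len(tokens):
        match = None
        for size in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + size])
            if candidate in term_dict:
                match = candidate
                break
        if match:
            chunks.append((match, True))   # translate via terminology table
            i += len(match.split())
        else:
            chunks.append((tokens[i], False))
            i += 1
    return chunks

terms = {"neural machine translation", "attention mechanism"}
print(forward_max_match("the neural machine translation system".split(), terms))
# [('the', False), ('neural machine translation', True), ('system', False)]
```

Per the method, the spans flagged as terms would be translated separately and substituted back into the output of the general translation model.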

5. Automated Machine Translation System with Source Text Preprocessing and Transformational Grammar

IQVIA Inc., 2023

Automated machine translation system that improves translation quality by preprocessing the source text before translation. The preprocessing involves steps like sentence splitting, simplification, named entity recognition, matching against existing translations, and applying transformational grammar. The goal is to generate semantically meaningful, less complex ordered tokens from the source text that are easier to translate accurately.
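
A hedged sketch of such a preprocessing chain appears below. The sentence-splitting and simplification rules are crude stand-ins, translation_memory is a hypothetical exact-match store rather than the patented system, and the named entity recognition and transformational grammar steps are omitted.

```python
# Preprocessing sketch: split into sentences, reuse exact matches from a
# translation memory, and break remaining sentences into simpler clauses.
import re

translation_memory = {"the patient signed the consent form.":
                      "Le patient a signé le formulaire de consentement."}

def preprocess(text):
    ordered_tokens = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        cached = translation_memory.get(sentence.lower())
        if cached:                      # reuse an existing translation verbatim
            ordered_tokens.append(("reuse", cached))
            continue
        # Simplification stand-in: split on coordinating commas so each
        # clause is translated as a shorter, less ambiguous unit.
        for clause in re.split(r",\s+(?=and|but|which)", sentence):
            ordered_tokens.append(("translate", clause))
    return ordered_tokens

print(preprocess("The patient signed the consent form. "
                 "The dose was reduced, and the trial continued."))
```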

6. Cross-Lingual Language Model Adaptation via Disentangled Syntax and Shared Conceptual Latent Space

Gnani Innovations Private Limited, 2023

Cross-lingual adaptation of language models for low-resource languages using disentangled syntax and a shared conceptual latent space. The method involves converting multilingual input sentences into linearized constituency parse trees, masking the leaf nodes to separate semantics from structure, passing the result to a syntactic encoder, and determining whether a new language is being learned. For languages with similar scripts, transliteration is used to align syntax; for unique scripts, pseudo-translation aligns semantics. This disentangles syntax from concepts, leverages language relatedness for adaptation, and improves low-resource cross-lingual performance.

Patent drawing: US20230394250A1
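
The linearize-and-mask step can be illustrated with nltk's Tree type. In this sketch the bracketed parse is hand-written, whereas the method assumes a constituency parser produces it; masking every leaf removes the semantics and leaves only the syntactic skeleton for the syntactic encoder.

```python
# Linearize a constituency parse and mask its leaves, keeping only syntax.
from nltk import Tree

parse = Tree.fromstring("(S (NP (PRP She)) (VP (VBZ reads) (NP (NNS books))))")

def linearize_and_mask(tree, mask_token="<MASK>"):
    """Flatten the tree to a one-line bracketed string with leaves masked."""
    masked = tree.copy(deep=True)
    for pos in masked.treepositions("leaves"):
        masked[pos] = mask_token
    return masked.pformat(margin=10_000)  # single-line bracketed string

print(linearize_and_mask(parse))
# (S (NP (PRP <MASK>)) (VP (VBZ <MASK>) (NP (NNS <MASK>))))
```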

7. Machine Learning-Based System for Sentence Translation Between Natural Languages

IQVIA Inc., 2023

Automated language translation using machine learning techniques to convert sentences from one natural language (like English) into another (like French). The translation is done by training a machine learning model on large datasets of translated sentences to learn the mapping between the source and target languages. The model can then accurately translate new, unseen sentences in the source language into the target language.

8. Machine Translation Method with Deep Learning and Attention-Based Neural Network Model

INSPUR CLOUD INFORMATION TECHNOLOGY CO LTD, 2023

A machine translation method using deep learning and attention models to improve the accuracy and efficiency of machine translation compared to existing methods like rule-based and statistical translation. The method involves training a neural network translation model using bilingual corpora, then deploying it to translate text. Attention mechanisms are used to focus the network on important parts of the input sequence during decoding. This allows the model to better capture and transfer meaning between languages. The attention weights can also be used to analyze the translation process. The method also includes steps like custom word segmentation to improve input processing.

Patent drawing: CN117094331A
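
At the core of such a model is scaled dot-product attention, sketched below in PyTorch with arbitrary sizes. The returned weights are per-source-word importances that sum to one, the same quantities the method proposes using to analyze the translation process.

```python
# Scaled dot-product attention: the decoder query attends over the encoded
# source sequence and returns a weighted summary plus the weights themselves.
import torch
import torch.nn.functional as F

def attention(query, key, value):
    scores = query @ key.transpose(-2, -1) / key.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)  # how much each source word matters
    return weights @ value, weights

q = torch.randn(1, 1, 64)      # current decoder state
k = v = torch.randn(1, 9, 64)  # encoded 9-word source sentence
context, weights = attention(q, k, v)  # weights: (1, 1, 9), rows sum to 1
```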

9. Neural Machine Translation with Context Filtering Attention in Transformer Encoder

KUNMING UNIVERSITY OF SCIENCE AND TECHNOLOGY, 2023

Chapter-level neural machine translation method that screens irrelevant or weakly related words out of the chapter context to improve translation accuracy. It introduces a context filtering attention module in the Transformer encoder that progressively filters the chapter context while encoding the current sentence, retaining strongly related words and discarding the rest. This incorporates only the most relevant context words into the sentence encoding.

10. Machine Translation System with Domain Classification and Reinforcement Learning-Enhanced Encoder-Decoder Architecture

BEIJING BAIFENDIAN SCIENCE & TECHNOLOGY GROUP CO LTD, 2023

Adaptive machine translation using reinforcement learning and domain classification to improve translation quality, especially for complex languages and specific domains. The method involves training a shared encoder-decoder transformer model on parallel corpora for multiple languages. Before the source text reaches the encoder, a multi-class neural network classifies its domain, distinguishing linguistic knowledge in specific fields. The classified domain is mapped to a specific encoder head for training, and a cross-entropy loss function is used to learn domain-specific representations. Reinforcement learning techniques such as rugged loss and feedback error are applied to encourage exploration and improve the model based on user feedback.

Patent drawing: CN116976361A
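
The classify-then-route step might look like the PyTorch sketch below; the module sizes, the two-domain setup, and the per-domain linear heads are invented for illustration, and the reinforcement learning stage is not shown.

```python
# Domain classification routing: a small classifier predicts the domain of
# the pooled source representation, and the prediction selects which
# domain-specific head processes the encoding.
import torch
import torch.nn as nn

d_model, n_domains = 512, 2
domain_classifier = nn.Sequential(nn.Linear(d_model, 128), nn.ReLU(),
                                  nn.Linear(128, n_domains))
# One lightweight head per domain on top of a shared encoder (not shown).
domain_heads = nn.ModuleList([nn.Linear(d_model, d_model)
                              for _ in range(n_domains)])

pooled = torch.randn(1, d_model)                   # pooled source encoding
domain = domain_classifier(pooled).argmax(dim=-1)  # predicted domain id
routed = domain_heads[domain.item()](pooled)       # domain-specific encoding
```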

11. Hierarchical Neural Network Configuration for Document Translation via Word Location, Meaning, and Grammar Mappings

AVODAH, INC., 2023

Configuring multiple neural networks to translate documents with hierarchical structure by generating mappings between word locations, meanings, and grammatical rules in the source and destination languages. Capturing the hierarchical structure of the languages and training neural networks to make corrections based on these mappings improves translation accuracy and efficiency. The method involves collecting data in the destination language, generating the mappings, training neural networks on corrections derived from them, and using the trained networks to translate documents.

12. Neural Network Configuration for Document Translation Using Hierarchical Structure Mapping

AVODAH, INC., 2023

Configuring neural networks for document translation that improves accuracy by leveraging the hierarchical structure of documents. The method involves generating hierarchical mappings for the source and destination languages, including mappings between word locations, grammatical information, and grammatical rules. Corrections are generated based on these mappings, trained into neural networks, and used to translate documents. This automated hierarchy analysis and correction training lets the networks identify and apply similar fixes across the document for more natural translation.

13. Machine Translation System Utilizing Deep Learning with Recurrent Neural Networks and Transformer Architecture

ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY, 2023

Machine translation system using deep learning that aims to improve efficiency and accuracy over traditional rule-based or statistical machine translation methods. The system leverages deep learning techniques like recurrent neural networks (RNNs) and transformers to provide better contextual modeling of language. It uses a data preparation module, model selection unit, feature representation unit, model training module, evaluation and tuning module, and deployment and application module to train and optimize the translation models. Compared to sequential RNNs, the transformer architecture allows parallel computation and captures long-distance dependencies, providing better computational efficiency and modeling capability for translation.

Patent drawing: CN116861929A

14. Neural Network Translation System with Dynamic Model Retraining Using Ranked Parallel Corpus Matches

Amazon Technologies, Inc., 2023

Machine translation system that customizes neural network translation models for specific text segments by dynamically retraining the model based on similar examples. When translating a new text, the system identifies matching phrases from a parallel corpus and ranks them based on applicability. It then retrains the neural network using the top-ranking matches to further customize the translation for those segments. This allows better translation accuracy for unique phrases that may not have been covered in the initial training.

Patent drawing: US11769019B1
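
The match-and-rank step can be sketched with difflib string similarity standing in for whatever scoring function the system actually uses; the retraining itself, a short fine-tuning pass over the top matches, is indicated only by a comment.

```python
# Rank parallel-corpus pairs by similarity to a new segment and keep the
# top k as candidates for a quick model customization pass.
import difflib

def top_matches(segment, parallel_corpus, k=3):
    """Rank (source, target) pairs by similarity to the new segment."""
    scored = [(difflib.SequenceMatcher(None, segment, src).ratio(), src, tgt)
              for src, tgt in parallel_corpus]
    return sorted(scored, reverse=True)[:k]

corpus = [("release the brake lever", "relâchez le levier de frein"),
          ("tighten the brake cable", "tendez le câble de frein"),
          ("charge the battery fully", "chargez complètement la batterie")]

matches = top_matches("release the brake cable", corpus, k=2)
# The top-ranked pairs would then drive a short fine-tuning pass on the
# translation model before it translates this segment.
```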

15. Neural Machine Translation System with Adapter Layers for Decoupled Language and Domain Feature Spaces

BEIJING INSTITUTE OF TECHNOLOGY, 2023

Multilingual, multidomain neural machine translation that leverages adapter layers to decouple language and domain feature spaces, allowing cross-lingual sharing of domain knowledge. Adapters are inserted inside the encoder-decoder model to mine language and domain knowledge separately. This decoupling enables transferring domain knowledge from a source language to a target language even if there's no parallel corpus in the target domain.

Patent drawing: CN116245116A
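
A minimal bottleneck adapter of the kind typically inserted inside encoder-decoder layers is sketched below: one instance carries language-specific features and another domain-specific features, composed over a frozen backbone. All sizes are illustrative assumptions, not values from the patent.

```python
# Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
# Only the adapters are trained; the backbone's weights stay frozen.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, hidden):
        # Residual connection preserves the frozen backbone's representation.
        return hidden + self.up(torch.relu(self.down(hidden)))

language_adapter = Adapter()  # swapped per language
domain_adapter = Adapter()    # swapped per domain, reusable across languages

hidden = torch.randn(1, 10, 512)  # one 10-token layer output
hidden = domain_adapter(language_adapter(hidden))
```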

16. Deep Learning-Based Machine Translation Method with Preprocessing and Multilingual Model Fine-Tuning

TIANJIN OPTICAL ELECTRICAL COMMUNICATION TECHNOLOGY CO LTD, 2023

Machine translation design method using deep learning that improves translation quality for small languages with limited data. The method involves preprocessing corpus data, pre-training a multilingual model, and fine-tuning the pre-trained model on specific language pairs. The preprocessing steps include sentence segmentation, word segmentation, language tagging, and random token replacement. This reduces dependence on large corpora and enables better translation from less data.

Patent drawing: CN116187350A
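
A hedged sketch of the language-tagging and random-replacement steps follows; the <lang> tag format, the 15% replacement rate, and the <mask> placeholder are assumptions borrowed from common multilingual pre-training practice, not details from the patent.

```python
# Corpus preprocessing sketch: prepend a language tag and randomly replace
# tokens with a placeholder, the usual noising step before pre-training.
import random

def preprocess(sentence, lang_tag, replace_rate=0.15, seed=0):
    rng = random.Random(seed)
    tokens = sentence.split()  # word-segmentation stand-in
    tokens = [tok if rng.random() > replace_rate else "<mask>"
              for tok in tokens]
    return f"<{lang_tag}> " + " ".join(tokens)

print(preprocess("translation quality improves with more data", "en"))
```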

17. Neural Machine Translation System with Cross-Level Attention and Document Structure Integration

SOOCHOW UNIVERSITY, 2023

Neural machine translation system for accurately and efficiently translating documents using a cross-level attention mechanism that leverages the context of entire documents during translation. The system preprocesses the corpus to add document structure information. It then trains a basic translation model using this structured corpus. During translation, it calculates dependency weights between words and sentences to get global context. It combines sentence vectors with weighted context to accurately select the context for each word. This allows using full document structure instead of just sentence context.

Patent drawing: CN113705168B

18. Multilingual Machine Translation Using Shared Semantic Representation with Pseudo-Parallel Data and Parameter Management Techniques

Beijing Lanzhou Technology Co., Ltd., 2023

Building multilingual machine translation models that can accurately translate between languages while avoiding off-target translation issues. The method involves converting each language's semantics into a shared representation, translating within that space, then converting back. This leverages shared parameters for a stronger representation while avoiding conflicts with language-specific semantics. The method also uses techniques such as pseudo-parallel data, pre-training, parameter freezing, and self-training to improve multilingual translation.

19. Neural Machine Translation System with Context-Sensitive Routing Algorithm for Sentence Encoding

BEIJING INSTITUTE OF TECHNOLOGY, 2023

Neural machine translation system for more accurate and coherent translation of longer texts, such as chapters, using a routing algorithm to selectively incorporate context information into the translation. The system encodes the current sentence and surrounding sentences using self-attention and linear layers. The routing algorithm computes a gate from the current sentence's encoding and uses it to screen the context information; the filtered context is fused with the current sentence encoding and passed through a decoder to generate the translation. In effect, the current sentence actively selects and combines only the context relevant to it.
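
The gating step might be sketched as follows; the pooled context vector, the single linear gate projection, and all dimensions are illustrative assumptions rather than details from the patent.

```python
# Gated context routing: the current sentence's encoding produces a sigmoid
# gate that decides, per dimension, how much context is allowed through.
import torch
import torch.nn as nn

d_model = 512
gate_proj = nn.Linear(2 * d_model, d_model)

sentence = torch.randn(1, 20, d_model)  # encoded current sentence (20 tokens)
context = torch.randn(1, 1, d_model)    # pooled encoding of nearby sentences

gate = torch.sigmoid(
    gate_proj(torch.cat([sentence, context.expand_as(sentence)], dim=-1)))
fused = sentence + gate * context       # context passes only where gate opens
```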

20. Natural Language Processing Device and Program with Selective Bilingual Data Utilization for Translation Model Training and Data Augmentation

JAPAN BROADCASTING CORPORATION (NHK), 2022

Natural language processing device and program that enables accurate machine translation by selectively using different types of bilingual data for training and data expansion. The device trains the main translation model on balanced parallel data free of omissions and over-translations, and trains a data augmentation model on only that balanced data as well. For translation it uses the main model, expanded with pseudo-parallel data generated by translating monolingual text with the augmentation model. This avoids degradation from noisy bilingual sources.

21. Neural Machine Translation Method with Unsupervised Dependency Syntax Integration for Thai-Chinese Language Pairs

22. Term Translation System with Context-Adaptive Term Base Integration and Marker Replacement Training

23. Hybrid Neural Machine Translation System with Domain-Specific Dictionary Integration

24. Machine Translation System with Semantic Extraction and Generation Modules Using Knowledge Base

25. Neural Machine Translation with Bidirectional Dependency Self-Attention in Transformer Encoder

The patents summarized here demonstrate a range of approaches to the challenge of accurate translation. Some concentrate on preprocessing the source text to improve translation quality; others adapt models to low-resource languages with strategies such as disentangling syntax from semantics.
