FORMATION AND DEVELOPMENT STAGES OF THE NATURAL LANGUAGE PROCESSING (NLP) FIELD
This article systematically analyzes the formation and development stages of the natural language processing (NLP) field. Drawing on scientific sources, the study covers the main evolutionary periods in NLP history: rule-based and grammatical approaches, semantic and symbolic models, statistical methods, and deep learning with large pre-trained language models. The literature review traces, in chronological order, the early ideas of machine translation, conceptual ontologies, corpus-based statistical models, and the emergence of modern neural architectures. It also highlights the contributions of the attention mechanism, the Transformer architecture, and transfer learning to the development of NLP. The analysis provides a scientific foundation for identifying the theoretical and methodological bases of modern NLP systems and for developing effective automatic text analysis models for low-resource languages, particularly Uzbek.