Ordinal PCL+: A Prompt-based Multilingual Framework for Few-shot and Zero-shot Cross-lingual Text Classification
DOI:
https://doi.org/10.54097/1rgc4x61

Keywords:
Cross-lingual Text Classification, Prompt-based Learning, Ordinal Consistency, Contrastive Learning, Probability Calibration

Abstract
This paper proposes Ordinal Prompt-based Cross-lingual Learning plus (Ordinal PCL+), a prompt-based multilingual framework for few-shot and zero-shot text classification. The approach inserts learnable continuous prompts, conditions token representations with a lightweight label-attention block, and performs a two-pass self-learning routine that couples pseudo-labels with representation–prototype agreement. For tasks with ordered labels, an ordinal head enforces monotonic consistency; a contrastive objective further aligns same-class instances across languages. The overall loss combines classification, agreement, ordinal, and contrastive terms and is followed by post-hoc temperature scaling to ensure well-calibrated probabilities. Evaluation on the Multilingual Amazon Reviews Corpus (MARC) uses English k-shot supervision (k ∈ {4, 8, 16, 32, 64}) with zero-shot transfer to five target languages, reporting accuracy and expected calibration error (ECE) across multiple seeds. Relative to fine-tuning and prompt-only variants, the framework consistently improves transfer while reducing calibration error, and it remains modular and lightweight enough for practical deployment on commodity hardware. Ablations quantify each component's contribution, showing complementary gains from label attention and self-learning, consistent benefits of the ordinal branch on rating-like labels, and improved reliability after temperature scaling.
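The abstract enumerates four training terms (classification, agreement, ordinal, contrastive) plus post-hoc temperature scaling. The sketch below illustrates, in PyTorch, how such an objective could be assembled. It is a minimal illustration based only on the abstract: the cumulative-link form of the ordinal head, the supervised-contrastive formulation, the loss weights, and all names (OrdinalHead, supcon_loss, fit_temperature) are assumptions, not the authors' implementation; the agreement term produced by the two-pass self-learning routine is taken as precomputed.

```python
# Minimal illustrative sketch of an Ordinal PCL+-style objective, assuming a
# PyTorch encoder that yields a pooled representation h and a projection z.
# Nothing here is the authors' code; names and weights are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalHead(nn.Module):
    """Cumulative-link head: one shared scalar score compared against strictly
    increasing thresholds, so sigmoid(logit_k) = P(y > k) is monotone in k."""
    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)
        self.raw_gaps = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # softplus keeps gaps positive; cumsum makes thresholds ascending.
        thresholds = torch.cumsum(F.softplus(self.raw_gaps), dim=0)
        return self.score(h) - thresholds          # (batch, num_classes - 1)

def ordinal_loss(ord_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Binary target per threshold k: 1 iff the true (0-indexed) label > k.
    k = torch.arange(ord_logits.size(1), device=labels.device)
    targets = (labels.unsqueeze(1) > k).float()
    return F.binary_cross_entropy_with_logits(ord_logits, targets)

def supcon_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive term: same-label instances (including pairs drawn
    from different languages) are pulled together in the projection space."""
    z = F.normalize(z, dim=-1)
    sim = (z @ z.t()) / tau
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))       # exclude self-pairs
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    if not pos.any():                               # no positive pairs in batch
        return z.new_zeros(())
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(dim=1)].mean()

def total_loss(cls_logits, ord_logits, z, labels, agree_term,
               w_agree=0.5, w_ord=0.5, w_con=0.5):
    """Classification + agreement + ordinal + contrastive; agree_term is the
    pseudo-label/prototype agreement loss computed by the self-learning pass."""
    return (F.cross_entropy(cls_logits, labels)
            + w_agree * agree_term
            + w_ord * ordinal_loss(ord_logits, labels)
            + w_con * supcon_loss(z, labels))

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> torch.Tensor:
    """Post-hoc temperature scaling: fit one scalar T > 0 on held-out logits by
    minimizing NLL. Argmax predictions are unchanged; only confidences move."""
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T so T > 0
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss
    opt.step(closure)
    return log_t.exp().detach()
```

At evaluation time, test logits are divided by the fitted temperature before the softmax, which changes confidences (and hence ECE) but not the argmax predictions; the prompt parameters, label-attention block, and pseudo-labeling passes are left to the encoder that produces h and z in this sketch.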
License
Copyright (c) 2026 Academic Journal of Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.